Minor point: I find Julie-and-Mark-like examples silly because they ask for a moral intuition about a case where the outcome is predefined. Our moral intuition makes arguments of the form “behavior X usually leads to a bad outcome, therefore X is wrong”. So if the outcome is already specified, the intuition has nothing to say; nor would we expect it to, since the whole point of morality is to help you make decisions between live possibilities. Why should it have anything to say about a situation that has already happened and cannot be altered?
Or to put it another way, I’m surprised no one said something to the effect of “Julie and Mark shouldn’t have had sex because at the time they did they had no way of knowing that it would turn out well, and in fact every reason to believe it would turn out very badly, based on the experiences of other incestuous siblings.”
For concreteness, imagine a different story where Julie and Mark decide to play Russian roulette in their cabin (again, just for fun). The chamber comes up empty for both of them, no harm results, and they never tell anyone etc. etc. So what was wrong with their actions?
I think most people would be able to handle that one very quickly. So the really interesting question is why no one comes up with such an explanation in the incest case.
Expecting evolved moral instincts to conform exactly to some simple unifying principle is like expecting the orbits of the planets to be in the same proportion as the first 9 prime numbers or something. That which is produced by a complex, messy, random process is unlikely to have a low-complexity description.
An interesting analogy. I mean, who would predict something crazy like the square of the orbital period being proportional to the cube of the orbital radius?
Obviously there’s no unifying principle in all that messy moral randomness. No hidden laws, just waiting to be discovered...
Right, I phrased that very badly. What I was trying to say is that the moral intuition was trained (evolutionarily or whatever) to map from behaviors to right/wrongness based on the (weighted) set of possible outcomes. So when we’re given a behavior, the intuition spits out a right/wrong decision based on what was likely to have happened, not considering what was stipulated in the problem to have actually happened.
See, and I was going to write that your second paragraph was more insightful and didn’t really follow from your first paragraph. I was going to say that it seemed like that was what moral intuition was actually calculating inaccessibly, so it is indeed interesting that (as far as Greene reports) nobody does come out with it as a rationalization. But then I held off, because I thought that I was just projecting my own thought process onto your words, and you might have meant something more in line with your first paragraph by them.
...because they ask for a moral intuition about a case where the outcome is predefined.
One thing I found a bit dodgy about that example is that it just asserts that the outcomes were positive.
I would bet that, for the respondents, simply being told that the outcomes were positive would still have left them feeling that in a real brother-sister situation like that there would likely have been some negative consequences.
Greene does not seem to take this into account when he interprets their responses.
Agreed. Morality is for determining what one has most reason to do or want. Clearly asking after the fact “did they do the wrong thing?” doesn’t mesh well with what morality is for. But the finger-wagging sorts of moralists might not agree.
Who says what morality is for? People have moral instincts which are used, more often than not, to evaluate already finished actions on a good/bad scale. People who engage in actions evaluated as wrong tend to be labeled as bad people in consequence. We encounter this use of morality every day. Maybe you claim that morality should be used differently, but that’s your (meta-)moral judgement (a prescriptive statement), while the original post and the thesis it referred to were descriptive about morality (and, I think, accurate).
I think the whole point is that our moral intuition doesn’t make arguments—we have them, and then we come up with rationalizations ex post facto.
But empirically, intuition really did prompt lots of people to classify the incest as a moral transgression.
True, although finger-waggers do say things like “Well sure, it might have turned out okay this time. But that doesn’t mean it was a good idea.”