To the extent your question is, “Suppose X is the correct answer. Is X the correct answer?”, X is the correct answer. Outside of that supposition it probably isn’t.
I don’t think that’s what I’m asking. Here’s an analogy. A person X comes to the conclusion fairly late in life that the morally best thing they can think of to do is to kill themselves in a way that looks like an accident and will their sizable life insurance policy to charity. This conclusion isn’t a reductio ad absurdum of X’s moral philosophy, even if X doesn’t like it. Regardless of this particular example, it could presumably be correct for a person to sacrifice themselves in a way that doesn’t feel heroic, isn’t socially accepted, and doesn’t save the whole world but maybe only a few far-away people. I think most people in such a situation (who managed not to rationalize the dilemma away) would probably not do it.
So I’m trying to envision the same situation for humanity as a whole. Is there any situation that humanity could face that would make us collectively say “Yeah doing Y is right, even though it seems bad for us. But the sacrifice is too great, we aren’t going to do it”? That is, if there’s room for space between “considered morality” and “desires” for an individual, is there room for space between them for a species?
Is there any situation that humanity could face that would make us collectively say “Yeah doing Y is right, even though it seems bad for us. But the sacrifice is too great, we aren’t going to do it”
This is still probably not the question that you want to ask. Humans do incorrect things all the time, with excellent rationalizations, so “But the sacrifice is too great, we aren’t going to do it” is not a particularly interesting specimen. To the extent that you think that “But the sacrifice is too great” is a relevant argument, you think that “Yeah doing Y is right” is potentially mistaken.
I guess the motivation for this post is in asking whether it is actually possible for a conclusion like that to be correct. I expect it might be, mainly because humans are not particularly optimized thingies, so it might be more valuable to use the atoms to make something else that’s not significantly related to the individual humans. But again to emphasize the consequentialist issue: to the extent such judgment is correct, it’s incorrect to oppose it; and to the extent it’s correct to oppose it, the judgment is incorrect.
“But the sacrifice is too great” is a relevant argument, you think that “Yeah doing Y is right” is potentially mistaken.
I think I disagree with this. On a social and political level, the tendency to rationalize is so pervasive it would sound completely absurd to say “I agree that it would be morally correct to implement your policy but I advocate not doing it, because it will only help future generations, screw those guys.” In practice, when people attempt to motivate each other in the political sphere to do something, it is always accompanied by the claim that doing that thing is morally right. But it is in principle possible to try to get people not to do something by arguing “hey this is really bad for us!” without arguing against its moral rightness. This thought experiment is a case where this exact “let’s grab the banana” position is supposed to be tempting.
People aren’t motivated by morality alone—people aren’t required to do what they recognize to be morally correct.
e.g. a parent may choose their kid’s life over the lives of a hundred other children. Because they care more about their own child—not because they think it’s the morally correct thing to do.
Our moral sense is only one of the many things that motivate us.
Our moral sense is only one of the many things that motivate us.
I’m talking about extrapolated morality, which is not the same thing as moral sense (i.e. judgments accessible on human level without doing much more computation). This extrapolated morality determines what should motivate you, but of course it’s not what does motivate you, and neither is non-extrapolated moral sense. In this sense it’s incorrect to oppose extrapolated morality (you shouldn’t do it), but you are in actuality motivated by other things, so you’ll probably act incorrectly (in this sense).
In what sense ‘should’ individuals be motivated by their CEV rather than by their non-CEV preferences? Wouldn’t breaking down the word ‘should’ in that previous sentence give you “Individuals want to achieve a state whereby they want to achieve what a perfect version of themselves would want to achieve rather than what they want to achieve”? Isn’t that vaguely self-defeating?
Could you please point me in the direction of some discussion about ‘extrapolated morality’ (unless you mean CEV, in which case there’s no need)?
CEV for individuals is vaguely analogous to what I’m referring to, but I don’t know in any detail what I mean.
It’s basically CEV for individuals, yeah.
In what sense ‘should’ individuals be motivated by their CEV rather than by their non-CEV preferences? Wouldn’t breaking down the word ‘should’ in that previous sentence give you “Individuals want to achieve a state whereby they want to achieve what a perfect version of themselves would want to achieve rather than what they want to achieve”? Isn’t that vaguely self-defeating?
It’s more a useful definition of “should” than advice using a preexisting meaning for “should”.
Congratulations, you have successfully answered the title!
Now, on to the actual post …