Interesting. This implies that there are actually two ways of interpreting such moral dilemmas: either as A) “what would you actually do in this situation”, or B) “what would be the right thing to do in this situation, regardless of whether you’d actually be capable of doing it”.
I’ve always interpreted the questions as being of type B, but the way you write suggests you’re thinking of them as being type A. I wonder how much of the disagreement relating to these questions is caused by differing interpretations.
It’s more complicated than that. Most people would say that there are imaginable situations where a certain course of action is right, but they’d be strongly tempted to act differently out of base motives. For example, if you ask a typical person whether it would be right to gain a large amount of money by some sort of cheating, assuming you know for sure there won’t be any negative consequences, they’ll immediately understand that the question is about what’s normatively right, not how they’d be tempted to act. Some very sincere people would probably admit that they might yield to the temptation, even though they consider it wrong.
Now, imagine you’re introduced to someone who had the opportunity to cheat a business partner for a million dollars with zero risk of repercussions, but flat-out refused to do so out of sheer moral fiber. You’ll immediately perceive this person as trustworthy and desirable to deal with—a man who acts according to high principles, not base passion and instinct. In contrast, you’d shun and despise him if you heard he’d acted otherwise.
However, let’s now compare that with the extreme fat man problem (where you’d have to cut the fat man’s throat to avert some greater loss of life). Imagine you’re introduced to someone who was faced with it and who slit the fat man’s throat without blinking. Would you feel warm and fuzzy about this person? Would any of the bullet-biting utilitarians fail to be profoundly creeped out just by the knowledge that they are standing next to someone who actually acted like that—even though they’d all defend (nay, prescribe!) his course of action relentlessly when philosophizing? Moreover, I would again bet dollars to donuts that our genteel utilitarians would be much less creeped out by someone who couldn’t bring himself to butcher the fat man.
When I think about this, I honestly can’t help but detect severe short-sightedness in moral bullet-biters.
Imagine you’re introduced to someone who was faced with it and who slit the fat man’s throat without blinking. Would you feel warm and fuzzy about this person?
I’m not sure “warm and fuzzy” is the right term, but … I would feel a certain respect, and of course revise downward my probability that they will fail to take the correct action out of bias or akrasia. And revise upward my probability that they will kill me.
Would you be creeped out by someone who cheerfully admitted they would kill you if you turned evil? I mean mind-control-type evil. Because in fiction, at least, that’s treated as a good thing, but still creepy.
(I think the creepiness is the fact that they can and will kill people, and there’s the ever-present worry they might mistake you for a risk.)
I can believe that a neurotypical person would be more likely to imagine themselves doing the actual killing, while someone on the autism spectrum would be more likely to stay with the abstract problem.