I’ve decided I really don’t like a lot of ethics thought-experiments. Making people contemplate a horrible but artificially constrained scenario gets them to react strongly, but the constraints placed on the thought-experiment block us from using it to reason about almost everything that actually motivates that strong reaction.
Part of a real-world reason not to push someone in front of a runaway trolley to stop it is that it might not work. The person might fight back; the trolley’s brakes might not actually be failing; the person might not be heavy enough to stop the trolley. But the trolley problem requires that we set these practical considerations aside and consider exactly two world-branches: kill the one person, or kill the five.
Another part of a real-world reason not to push someone in front of a runaway trolley is that other people might do bad things to you and your loved ones because you did it. You might go to jail, lose your job, be unable to provide for your kids, be dumped by your spouse or partner. If you’re a doctor who saves lives every week, it would be pretty evil for you to throw your career away for the chance of saving a few people on a trolley track. If you’re working on existential risks, your getting put in jail might literally cost us the world. But the trolley problem doesn’t ask us to think about those consequences, just the immediate short-term ones: kill the one person, or kill the five.
In other words, the trolley problem doesn’t ask us to exercise our full wisdom to solve a tricky situation. It asks us to be much stupider than we are in ordinary life; to blank out most of the likely consequences; to ignore many of the strongest and most worthwhile reasons that we might have to do (or refrain from doing) something. The only way the suggestion to reduce moral decision-making to “5 deaths > 1 death” can sound even remotely reasonable is to turn off most of your brain.
(I should add: My friend with a master’s degree in philosophy thinks I’m totally missing the point of the distinction between ethical philosophy and applied morality.)