If you assume 3, average utilitarianism says you should kill anyone who has below average utility, since that raises the average. So in the end you kill everyone except the one person who has the highest utility. There is no need for assumption 2 at all.
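The convergence claim here is easy to check numerically. Below is a toy simulation (illustrative only, and granting the assumption that removals have no effect on anyone else's utility): repeatedly removing everyone below the average raises the average each round, until only the highest-utility person remains.

```python
def cull_below_average(utilities):
    """Repeatedly remove below-average utilities until a fixed point.

    Models the (assumed side-effect-free) policy of killing anyone
    whose utility is below the population average.
    """
    while True:
        avg = sum(utilities) / len(utilities)
        survivors = [u for u in utilities if u >= avg]
        if len(survivors) == len(utilities):
            return survivors  # fixed point: nobody is below average
        utilities = survivors

# With distinct utilities, only the single maximum survives:
print(cull_below_average([1, 2, 3, 4, 10]))  # [10]
```

If several people tie for the highest utility, the process stops with all of them alive, which is consistent with the argument: everyone strictly below the top is eliminated.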
BTW, are you aware of any of the previous literature and discussion on quantum suicide and immortality?
See also http://www.acceleratingfuture.com/steven/?p=215
You really need to also assume killing people doesn’t decrease utility for those who are left, which usually doesn’t work too well for humans...
Which is why we do not really believe in average utilitarianism…
Average utilitarianism requires more: it requires that it is possible to have a policy of systematically killing most people that does not result in negative experiences. This does not seem meaningfully possible for any agents that are vaguely human, so this is a straw man objection to average utilitarianism, and a pretty bad one at that.
No. Not if you assign a large negative utility to total death. That is, my dying unconditionally has a large negative value to me.
If I assume that the rate of my unconditional death does not change significantly after the experiment, then it could make sense to play the roulette.
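This trade-off can be made concrete with a toy expected-utility calculation. All the numbers below are hypothetical stand-ins: D for the large disutility of unconditional death, b for the benefit a surviving player gets, and p for the chance of dying in the game.

```python
# Hypothetical values, chosen only to illustrate the argument.
D = 1000.0   # large disutility assigned to total (unconditional) death
b = 10.0     # benefit to a player who survives
p = 1 / 6    # chance of dying in the game

# Ordinary expected-utility reasoning: a large D dominates the benefit.
ev_play = (1 - p) * b - p * D
print(ev_play > 0.0)  # False

# If the experiment does not significantly change your rate of
# unconditional death (the quantum-suicide framing), p is effectively
# zero from your perspective, and playing comes out ahead.
ev_play_no_extra_death = b - 0.0 * D
print(ev_play_no_extra_death > 0.0)  # True
```

The point is just that the sign of the calculation flips depending on whether the game actually adds to your probability of unconditional death, which is exactly the hinge of the comment above.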
This assumes that killing people with low utility has absolutely no effect on the utility of anyone else, and that living in a world where you will be killed if you are not happy enough has no negative effect on your happiness. This is, to put it mildly, completely false for anything short of a very radically altered human mind.
On a related note, the beatings will continue until morale improves.
Thanks for the links. They look interesting.
The base idea seems identical to the quantum suicide scenarios. Although I did not know about them, my only contribution is to put up a convincing concrete scenario in which suicide offers a benefit to each player.