I agree with this analysis provided there is some reason for linear aggregation.
Why should the utility of the world be the sum of the utilities of its inhabitants? Why not, for instance, the min of the utilities of its inhabitants?
I think that’s what my intuition wants to do anyway: care about how badly off the worst-off person is, and try to improve that.
U1(world) = min_people(u(person)) instead of U2(world) = sum_people(u(person))
so U1(torture) = -big, U1(dust) = -tiny
U2(torture) = -big, U2(dust) = -outrageously massive
Thus, if you use U1, you choose dust because -tiny > -big, but if you use U2, you choose torture because -big > -outrageously massive.
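To make the comparison concrete, here’s a minimal Python sketch; the specific utility numbers (-1 per speck, -1,000,000 for torture, a population of 3 million) are invented purely for illustration:

    # Two ways to aggregate individual utilities into a world-utility.
    def u1(utilities):          # min-aggregation
        return min(utilities)

    def u2(utilities):          # linear (sum) aggregation
        return sum(utilities)

    # One person tortured, everyone else untouched:
    torture = [-1_000_000] + [0] * 3_000_000
    # Everyone gets a dust speck:
    dust = [-1] * 3_000_000

    print(u1(torture), u1(dust))   # -1000000 vs -1: U1 prefers dust
    print(u2(torture), u2(dust))   # -1000000 vs -3000000: U2 prefers torture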
But I see no real reason to prefer one intuition over the other, so my question is this:
Why linear aggregation of utilities?
Min is a really bad metric—it means that, for example, my decision of whether to torture someone or not doesn’t matter as long as someone out there is also getting tortured. So it doesn’t actually lead to an answer to the dust speck problem. And if you limit it to the min over the people involved, it leads to conclusions like “it’s better to break 1 billion people’s non-dominant arms than one person’s dominant arm”, which in my opinion is absurd.
I find it hard to believe that you believe that. Under that metric, for example, “pick a thousand happy people and kill their dogs” is a completely neutral act, along with lots of other extremely strange results.
Oh, good point, maybe a kind of lexicographic ordering (compare the worst-off, then the second-worst-off, and so on) could break ties.
So then, we disregard everyone who isn’t affected by the possible action and maximize the minimum utility among those who are.
But still, this prefers a million people being punched once to any one person being punched twice, which seems silly—I’m just trying to parse out my intuition for choosing dust specks.
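To make the punch example concrete, here’s a quick sketch of that restricted rule (an arbitrary -1 utility per punch):

    # Min over only the people the action affects.
    def restricted_min(affected_utilities):
        return min(affected_utilities)

    million_punched_once = [-1] * 1_000_000
    one_punched_twice = [-2]

    print(restricted_min(million_punched_once))   # -1
    print(restricted_min(one_punched_twice))      # -2
    # -1 > -2, so the rule prefers a million single punches to one double punch.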
I get that other possible methods being flawed is a point in favor of linear aggregation, but what positive reasons are there for it?
Or, for a perhaps more dramatic instance: “Find the world’s unhappiest person and kill them”. Of course total utilitarianism might also endorse doing that (as might quite a lot of people, horrible though it sounds, on considering just how wretched the lives of the world’s unhappiest people probably are), but min-utilitarianism continues to endorse doing this even if everyone in the world—including the soon-to-be-ex-unhappiest-person—is extremely happy and very much wishes to go on living.
The specific problem which causes that is that most versions of utilitarianism don’t allow the fact that someone desires not to be killed to affect the utility calculation, since after they have been killed, they no longer have utility.
Yes, this is a failure mode of (some forms of?) utilitarianism, but not the specific weirdness I was trying to get at, which was that if you aggregate by min(), then it’s completely morally OK to do very bad things to huge numbers of people—in fact, it’s no worse than radically improving huge numbers of lives—as long as you avoid affecting the one person who is worst-off. This is a very silly property for a moral system to have.
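In code form (numbers invented): as long as the people you harm stay above the current minimum, min() literally cannot register the harm:

    worst_off = -100              # the world's worst-off person, untouched
    happy_before = [50] * 1000
    happy_after = [30] * 1000     # each loses 20 (dog killed), still above the minimum

    print(min([worst_off] + happy_before))   # -100
    print(min([worst_off] + happy_after))    # -100, identical world-utility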
You can attempt to mitigate this property with too-clever objections, like “aha, but if you kill a happy person, then in the moment of their death they are temporarily the most unhappy person, so you have affected the metric after all”. I don’t think that actually works, but didn’t want it to obscure the point, so I picked “kill their dog” as an example, because it’s a clearly bad thing which definitely doesn’t bump anyone to the bottom.