I’ve seen “utilitarianism” used to denote both “my utility is the average/[normalized sum] of the utility of each person, plus my exclusive preferences” and “my utility is a weighted sum/average of the utility of a bunch of entities, plus my exclusive preferences”. I’m almost sure that few LWers would claim to be utilitarians in the former sense, especially since most people round here believe minds are made of atoms and thus not very discrete.
I mean, we can add/remove small bits from minds, and unless personhood comes in degrees (which would push us toward the second, weighted sense of utilitarianism), one tiny change to a mind would have to suddenly shift us from fully caring about it to not caring about it at all, which doesn’t seem to be what humans do. This is an instance of the Sorites “paradox”.
(One might argue that utilities are only defined up to positive affine transformation, but when I say “utility” I mean the thing that’s like utility except that it’s comparable between agents. Now that I think about it, you might mean that we’ve defined each person’s utility so that every util is equal across agents in that comparable sense, but I don’t think you meant that.)
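To make the affine-transformation point concrete, here is a toy sketch (the names and numbers are mine, purely illustrative): rescaling one agent’s utility function leaves that agent’s own preferences unchanged, but flips which outcome a naive cross-agent sum recommends, which is exactly why interpersonal comparability has to be added as an extra assumption.

```python
# Toy illustration: a single agent's utility function is only defined up to
# positive affine transformation (u -> a*u + b with a > 0), which preserves
# that agent's preference order but NOT which outcome maximizes a naive sum
# of utilities across agents.

alice = {"x": 1.0, "y": 2.0}   # Alice's utilities over two outcomes
bob = {"x": 3.0, "y": 1.0}     # Bob's utilities over the same outcomes

# Rescale Alice's utilities with a = 100, b = 5: her preferences are unchanged.
alice_rescaled = {k: 100 * v + 5 for k, v in alice.items()}
assert (alice["y"] > alice["x"]) == (alice_rescaled["y"] > alice_rescaled["x"])

# But the outcome maximizing the cross-agent sum flips after the rescaling.
total = {k: alice[k] + bob[k] for k in alice}                    # x: 4, y: 3
total_rescaled = {k: alice_rescaled[k] + bob[k] for k in alice}  # x: 108, y: 206

best = max(total, key=total.get)                             # "x"
best_rescaled = max(total_rescaled, key=total_rescaled.get)  # "y"
```

So a sum (or average) of utilities is only well-defined once we fix a particular normalization for each person, which is the extra choice the parenthetical is gesturing at.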
Utilitarianism is normative, so it means that your utility should be the average of the utility of all beings capable of experiencing it, regardless of whether your utility currently is that. If it becomes an unequally weighted average, it ceases to be utilitarianism, because the weights import considerations other than the maximization of utility.
one tiny change in the mind would have to suddenly shift us from fully caring about a mind to not caring about it at all, which doesn’t seem to be what humans do
Consider how much people care about the living compared to the dead. I think that’s a counterexample to your claim.