No, they really don’t, most significantly because most people don’t behave as consequentialists of any kind.
Most people don’t consistently behave as consequentialists, but they do make consequentialist decisions some of the time, particularly in cases like this one.
Consider a less extreme example. Suppose your friend Xerxes is obsessed with Beethoven: he listens to every known composition, tries to learn each one, and derives great enjoyment from doing so. Your friend Ygnacio also likes classical music in general but has no particular fondness for Beethoven. While digging through your belongings, you discover an antique sheet of music handwritten by Beethoven himself. Coincidentally, Xerxes’s and Ygnacio’s birthdays are coming up, and this would make a good gift for either of them, but as there is only one sheet of music, only one of them can receive it. Certainly, Ygnacio would appreciate it, but Xerxes would like it much more. In such a situation, most people would give the sheet music to Xerxes, because he would enjoy it more.
As for the utility monster, that’s a non sequitur in this context, because we’re not talking about true (agent-neutral) utilitarianism, only about utility maximization, which is not the same thing.
Even if we ignore the type error of comparing XerxesValue and YgnacioValue
We’re not comparing XerxesValue and YgnacioValue; we’re comparing HowMuchYouCareAboutXerxes × XerxesValue and HowMuchYouCareAboutYgnacio × YgnacioValue, which does not produce a type error.
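To make the type distinction concrete, here is a toy sketch (all names and numbers are hypothetical, not from the discussion above): each friend’s enjoyment lives on that friend’s own scale, but weighting it by how much *you* care about them converts both quantities onto a single scale, your utility, where comparison is well-typed.

```python
from dataclasses import dataclass

@dataclass
class AgentValue:
    """Enjoyment measured in a particular agent's own units;
    raw amounts from different agents are not directly comparable."""
    agent: str
    amount: float

def my_utility(care_weight: float, value: AgentValue) -> float:
    """Weight an agent-scoped value by how much I care about that agent.
    Both results then share one scale (my utility), so comparing them
    is no longer a type error."""
    return care_weight * value.amount

# Hypothetical numbers: Xerxes would enjoy the gift far more.
xerxes_value = AgentValue("Xerxes", 10.0)
ygnacio_value = AgentValue("Ygnacio", 4.0)

# Suppose I care about both friends equally (weight 1.0 each).
mine_x = my_utility(1.0, xerxes_value)
mine_y = my_utility(1.0, ygnacio_value)
recipient = xerxes_value.agent if mine_x > mine_y else ygnacio_value.agent
print(recipient)  # Xerxes
```

With equal care weights the comparison reduces to comparing raw enjoyment, but unequal weights (say, caring more about Ygnacio) could flip the answer without any type confusion.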
your decision ‘should’ take into account other information including things like who you gave the strawberry tarts to ten minutes ago and assorted other social transactions
If you gave the strawberry tarts to someone ten minutes ago, it is reasonable to assume that because of diminishing marginal utility, they won’t value sweets as highly as they did before. But if you have reason to believe that they don’t experience diminishing marginal utility, or that their diminished derived utility would still be greater than the utility derived by an alternative person, then you should give it to the person who would derive greater utility (assuming you value them equally).
It’s true that people don’t always give all favors to the most enthusiastic person, but that is justified because it’s reasonable to assume that enthusiasm isn’t always a reliable indication of derived value.
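The decision rule above can be sketched as a toy model (the halving discount, names, and numbers are all assumptions for illustration): each treat recently received discounts the value of the next one, and the treat goes to whoever would derive the most utility, assuming you value everyone equally.

```python
def derived_utility(base_value: float, recent_treats: int,
                    discount: float = 0.5) -> float:
    """Toy model of diminishing marginal utility: each treat already
    received recently halves (by default) the value of the next one."""
    return base_value * (discount ** recent_treats)

def pick_recipient(candidates: dict[str, tuple[float, int]]) -> str:
    """Give the treat to whoever would derive the most utility from it,
    assuming we care about all candidates equally."""
    return max(candidates,
               key=lambda name: derived_utility(*candidates[name]))

# Hypothetical numbers: (base value of a sweet, treats received recently).
friends = {"Alice": (10.0, 1),   # loves sweets, but just had a tart
           "Bob":   (6.0, 0)}    # likes sweets, hasn't had any yet
print(pick_recipient(friends))   # Bob
```

Note that if Alice valued sweets highly enough (say a base value of 14, discounted to 7), her diminished utility would still exceed Bob’s 6, and the rule would give her the treat, matching the caveat above.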
(Had to edit this a million times, markup hates me.)
But if you have reason to believe that they don’t experience diminishing marginal utility, or that their diminished derived utility would still be greater than the utility derived by an alternative person, then you should give it to the person who would derive greater utility (assuming you value them equally).
How do you think caring about having more allies than one affects this situation?
If that’s a term in your utility function, then you should consider it. Here, I’m assuming there aren’t any other effects.