But people do make a sort-of utilitarian calculation about what to give to whom.
No, they really don’t. Most significantly because most people don’t behave as consequentialists of any kind. Also, and this time fortunately, our egalitarian instincts are sufficient to suppress that kind of folly.
If your friend Xerxes says he values chocolate cake at OVER 9000!!! and your friend Ygnacio says he values it at 1, you care about both of them roughly equally, and you assume that they’re stating their preferences honestly, you should give the cake to Xerxes.
No, you shouldn’t. Even if we ignore the type error of comparing Xerxes_value and Ygnacio_value, your decision ‘should’ take into account other information including things like who you gave the strawberry tarts to ten minutes ago and assorted other social transactions. I have not met a single human who gives all his favours to the same person because that person is the most enthusiastic (and I would consider the resultant behaviour to be repugnant and a behavioural red flag).
But then there’s an incentive to exaggerate one’s preferences.
Yes, as the grandparent observed, when the people being spoken to are both gullible and utilitarian and the speakers are neither ethical nor utilitarian, ‘Tell’ does devolve into ‘Ask’. However, if the listener is either not gullible or not a crude total utilitarian, then exaggerating your preferences amounts to crippling your own ability to receive value via either trade or gifts, in the limit leaving you able to communicate only booleans. (I.e., it’s purely destructive self-sabotage.)
No, they really don’t. Most significantly because most people don’t behave as consequentialists of any kind.
Most people don’t consistently behave as consequentialists, but they do make consequentialist decisions some of the time, particularly in cases like this one.
Consider a less extreme example. Suppose your friend Xerxes is obsessed with Beethoven: he listens to every known composition, tries to learn it, and derives great enjoyment from doing so. Your friend Ygnacio also likes classical music in general but has no specific fondness for Beethoven. While digging through your belongings, you discover an antique sheet of music handwritten by Beethoven. Coincidentally, Xerxes’s and Ygnacio’s birthdays are coming up, and this would make a good gift for either of them; but as there’s only one sheet of music, only one of them can receive it. Certainly, Ygnacio would appreciate it, but Xerxes would like it much more. In such a situation, most people would give the sheet music to Xerxes, because he would enjoy it more.
As for the utility monster, that’s a non sequitur in this context, because we’re not talking about true (agent-neutral) utilitarianism, only about utility maximization, which is not the same thing.
Even if we ignore the type error of comparing XerxesValue and YgnacioValue
We’re not comparing XerxesValue and YgnacioValue, we’re comparing HowMuchYouCareAboutXerxes x XerxesValue and HowMuchYouCareAboutYgnacio x YgnacioValue, which does not produce a type error.
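A minimal sketch of that comparison (the variable names mirror the ones above; the numbers are illustrative assumptions, not anything stated in the thread). The point is that once each friend’s stated value is multiplied by your caring weight, both sides are in the same units, “utility to you,” so comparing them is well-typed:

```python
# Sketch of the weighted comparison described above. Each friend's
# stated value is scaled by how much you care about that friend, so
# both sides end up in the same units ("value to you") and can be
# compared without a type error.

def weighted_value(care_weight, stated_value):
    """Value *to you* of giving the gift to this friend."""
    return care_weight * stated_value

# Illustrative numbers: equal caring, honest (if dramatic) reports.
how_much_you_care_about_xerxes = 1.0
how_much_you_care_about_ygnacio = 1.0
xerxes_value = 9001   # "OVER 9000"
ygnacio_value = 1

xerxes_side = weighted_value(how_much_you_care_about_xerxes, xerxes_value)
ygnacio_side = weighted_value(how_much_you_care_about_ygnacio, ygnacio_value)

recipient = "Xerxes" if xerxes_side > ygnacio_side else "Ygnacio"
```

With equal caring weights this reduces to comparing the stated values directly; unequal weights would tilt the comparison accordingly.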
your decision ‘should’ take into account other information including things like who you gave the strawberry tarts to ten minutes ago and assorted other social transactions
If you gave the strawberry tarts to someone ten minutes ago, it is reasonable to assume that because of diminishing marginal utility, they won’t value sweets as highly as they did before. But if you have reason to believe that they don’t experience diminishing marginal utility, or that their diminished derived utility would still be greater than the utility derived by an alternative person, then you should give it to the person who would derive greater utility (assuming you value them equally).
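A quick sketch of the diminishing-marginal-utility adjustment just described (the base values and the 0.5 discount factor are arbitrary illustrative assumptions): the friend who already received the tarts has their value for further sweets discounted, and the cake goes to whoever derives greater utility now.

```python
# Sketch of diminishing marginal utility for repeated gifts. Each
# sweet already received multiplies the next sweet's value by a
# discount factor (0.5 is an arbitrary illustrative choice).

def marginal_utility(base_value, sweets_already_received, discount=0.5):
    return base_value * (discount ** sweets_already_received)

# Friend A got the strawberry tarts ten minutes ago; friend B got nothing.
a_utility = marginal_utility(base_value=10, sweets_already_received=1)
b_utility = marginal_utility(base_value=8, sweets_already_received=0)

# Assuming you value both friends equally, give the cake to whoever
# derives the greater (discounted) utility right now.
recipient = "B" if b_utility > a_utility else "A"
```

Note that if A’s discounted utility still exceeded B’s (e.g., a much larger base value, or no diminishing effect), the same rule would hand A the cake too, which is exactly the caveat in the paragraph above.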
It’s true that people don’t always give all favors to the most enthusiastic person, but that is justified because it’s reasonable to assume that enthusiasm isn’t always a reliable indication of derived value.
(Had to edit this a million times, markup hates me.)
But if you have reason to believe that they don’t experience diminishing marginal utility, or that their diminished derived utility would still be greater than the utility derived by an alternative person, then you should give it to the person who would derive greater utility (assuming you value them equally).
How do you think caring about having more allies than one affects this situation?
If that’s a term in your utility function, then you should consider it. Here, I’m assuming there aren’t any other effects.