I doubt all of your ought claims.
I doubt all of the claims, including the “is” claim.
Me too. The claims are doing all the work, while the argument is a triviality.
I agree that the claims are doing all of the work and that this is not a convincing argument for utilitarianism. I often hear arguments for moral philosophies that make a ton of implicit assumptions. I think that once you make them explicit and actually try to be rigorous, the argument always seems less impressive and less convincing.
I think a key principle involves selecting the right set of ought claims as assumptions. Some are more convincing than others. E.g. I believe “The fairness of an outcome ought to be irrelevant (this is probably the most interesting and contentious assumption).” can be replaced with something like “Frequencies and stochasticities are interchangeable; an X% chance of affecting everyone’s utility is equivalent to a 100% chance of affecting X% of people’s utility”.
This is a much more agreeable assumption. When I get a chance, I’ll make sure it can replace the fairness one and add it to the proof and give you credit.
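The replacement assumption is just an expected-value identity under total-utility accounting. A quick numeric sketch (the population size, loss, and probability below are toy numbers of my own, not from the thread):

```python
# Toy check (my own numbers): under expected *total* utility, an X% chance
# of affecting everyone's utility is equivalent to a 100% chance of
# affecting X% of people's utility by the same amount.
n = 100    # population size
d = 4.0    # utility loss per affected person
p = 0.25   # "X%" as a probability

# Lottery 1: with probability p, all n people lose d.
expected_loss_1 = p * (n * d)

# Lottery 2: with certainty, a fraction p of the n people lose d.
expected_loss_2 = 1.0 * (p * n) * d

assert expected_loss_1 == expected_loss_2  # both are 100.0
```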
Another issue:
It implies that there exists some such function. It does not imply there exists a single unique function. And indeed the resulting function is not unique.
If I have two choices A and B, and I rank A>B, u(PA)=10PA might be one valid function (effective value of 10 for A and 0 for B), but u(PA)=2PA+1 might be another (effective value of 3 for A and 1 for B).
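To make the non-uniqueness concrete: the two functions above are positive affine transforms of each other (u2 = 0.2·u1 + 1), so they rank every lottery identically. A small sketch:

```python
# Two different utility functions that both represent the same
# preference A > B, written as functions of PA (probability of A).
def u1(pa):  # effective value 10 for A (pa=1), 0 for B (pa=0)
    return 10 * pa

def u2(pa):  # effective value 3 for A, 1 for B
    return 2 * pa + 1

lotteries = [0.0, 0.25, 0.5, 0.75, 1.0]  # probabilities of getting A
rank1 = sorted(lotteries, key=u1)
rank2 = sorted(lotteries, key=u2)
assert rank1 == rank2  # identical ranking over all lotteries
```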
This, unfortunately, rather undermines the rest of your argument.
“Since for any two VNM-agents X and Y, their VNM-utility functions uX and uY are only determined up to additive constants and multiplicative positive scalars, the theorem does not provide any canonical way to compare the two. Hence expressions like uX(L) + uY(L) and uX(L) − uY(L) are not canonically defined, nor are comparisons like uX(L) < uY(L) canonically true or false. In particular, the aforementioned ‘total VNM-utility’ and ‘average VNM-utility’ of a population are not canonically meaningful without normalization assumptions.”
https://en.wikipedia.org/wiki/Von_Neumann%E2%80%93Morgenstern_utility_theorem#Incomparability_between_agents
I don’t think I agree that this undermines my argument. I showed that the utility function of person 1 is of the form h(x + y) where h is monotonic increasing. This respects the fact that the utility function is not unique. 2(x + y) + 1 would qualify, as would 3 log(x + y), etc.
Showing that the utility function must have this form is enough to prove total utilitarianism in this case since when you compare h(x + y) to h(x’+ y’), h becomes irrelevant. It is the same as comparing x + y to x’ + y’.
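A quick sketch of why h drops out; the h’s below are my own examples, matching the 2(x + y) + 1 and 3 log(x + y) forms mentioned above:

```python
import math

# For any monotonic increasing h, h(x + y) > h(x' + y') holds exactly
# when x + y > x' + y', so h is irrelevant to the comparison.
hs = [
    lambda s: 2 * s + 1,
    lambda s: 3 * math.log(s),  # valid for s > 0
    lambda s: s ** 3,
]

pairs = [((4, 3), (2, 6)), ((1, 1), (5, 2)), ((2, 2), (3, 1))]
for (x, y), (xp, yp) in pairs:
    direct = (x + y) > (xp + yp)
    for h in hs:
        assert (h(x + y) > h(xp + yp)) == direct
```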
I have three agents A, B, and C, each with the following preferences between two outcomes a and b:
1. Agents A and B prefer a>b; agent C prefers b>a.
2. For any two lotteries <L, with an x% chance of getting a, otherwise b> and <M, with a y% chance of getting a, otherwise b>:
   - If x>y: A and B prefer L; C prefers M.
   - If x=y: all three agents are indifferent between L and M.
   - If x<y: A and B prefer M; C prefers L.
(2 is redundant given 1, but I figured it was best to spell it out.)
This satisfies the axioms of the VNM theorem.
I’ll give you a freebie here: I am declaring, as part of the problem, that agent C’s utility function is uC(Pa)=−2Pa. This is compatible with the definition of agent C’s preferences, above.
As for agents A and B, I’ll give you less of a freebie:
I am declaring as part of the problem that one of the two agents, agent [redacted alpha], has the following utility function: u[Redacted Alpha](Pa)=3Pa. This is compatible with the definition of agent [redacted alpha]’s preferences, above.
I am declaring as part of the problem that the other of the two agents, agent [redacted beta], has the following utility function: u[Redacted Beta](Pa)=Pa. This is compatible with the definition of agent [redacted beta]’s preferences, above.
Now, consider the following scenarios:
1. Agent [redacted alpha] and agent C are choosing between a and b:
   The resulting utility function is u[Redacted Alpha](Pa)+uC(Pa)=3Pa−2Pa=Pa.
   The resulting optimal outcome is outcome a.
2. Agent [redacted beta] and agent C are choosing between a and b:
   The resulting utility function is u[Redacted Beta](Pa)+uC(Pa)=Pa−2Pa=−Pa.
   The resulting optimal outcome is outcome b.
3. Agent A and agent C are choosing between a and b:
   Is this the same as scenario 1? Or scenario 2?
4. Agent B and agent C are choosing between a and b:
   Is this the same as scenario 1? Or scenario 2?
Please tell me the optimal outcome for scenarios 3 and 4.
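Scenarios 1 and 2 can be checked mechanically. A sketch using the declared utility functions, where outcome a corresponds to Pa = 1 and outcome b to Pa = 0:

```python
# Declared utility functions from the problem above.
def u_alpha(pa): return 3 * pa    # agent [redacted alpha]
def u_beta(pa):  return pa        # agent [redacted beta]
def u_c(pa):     return -2 * pa   # agent C

outcomes = [0.0, 1.0]  # Pa = 0 is outcome b, Pa = 1 is outcome a

# Scenario 1: summed utility is 3Pa - 2Pa = Pa, maximized at Pa = 1 (outcome a).
best_1 = max(outcomes, key=lambda pa: u_alpha(pa) + u_c(pa))
assert best_1 == 1.0

# Scenario 2: summed utility is Pa - 2Pa = -Pa, maximized at Pa = 0 (outcome b).
best_2 = max(outcomes, key=lambda pa: u_beta(pa) + u_c(pa))
assert best_2 == 0.0
```

Since A and B have identical preferences, nothing in the VNM theorem picks out which of them is [redacted alpha] and which is [redacted beta], which is exactly what scenarios 3 and 4 turn on.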
This assumes that the act of evaluating a utility function has no utility cost.
I do not agree with this (implicit) assumption.
Good point, I overlooked this.