Taken literally, maximising (economic) utility means wanting the sum of all people’s utility functions to be as high as possible. But by doing so, we are double-counting! Let’s say I assign utility U to living a happy life, and utility U+1 to my wife living a happy life; my wife does the converse. If we both have happy lives, we have total utility 4U+2, which means that our lives should be prioritised over the lives of four other people who value their own lives just as highly, but don’t care much about other people! This is bizarre, and gets more so when we consider that people might have many strong relationships. By this calculation method, a family of five people who all value each other more than themselves have more total utility than 25 equally happy loners.
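The arithmetic here can be made explicit with a small sketch (Python; U is an arbitrary illustrative value, since the argument goes through for any positive U):

```python
U = 10  # illustrative stand-in for "utility U"; any positive value works

# Each spouse assigns U to their own happy life and U + 1 to their partner's.
my_total = U + (U + 1)
wife_total = U + (U + 1)
couple_sum = my_total + wife_total  # 4U + 2

# Four loners who value their own lives just as highly, but no one else's.
loners_sum = 4 * U

# The couple's summed utility "outweighs" the four loners'.
assert couple_sum > loners_sum

# Likewise for the family of five: each member values their own life at U
# and each of the other four members at U + 1.
family_sum = 5 * (U + 4 * (U + 1))  # 25U + 20
assert family_sum > 25 * U  # five people "outweigh" 25 equally happy loners
```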
This is incorrect, and the mistake is a critical flaw in your reasoning; but, ironically, your reasoning is also incorrect in a second, totally different way, which makes your conclusion come out right in the end (but only by coincidence).
Mistake One: Decision-theoretic utility attaches to outcomes, and ‘outcomes’ here must be understood to be world-states—not components of world-states. In other words, your example would properly go like this:
Say you assign utility U to the outcome “I live a happy life, and so does my wife”; and your wife assigns utility U to the outcome “I live a happy life, and so does my husband”. Adding these gives us 2U; no double-counting occurs, nor is it ever possible for double-counting to occur.
However, celebrations are premature, because then there is…
Mistake Two: As per the Von Neumann–Morgenstern utility theorem (which tells us that the preferences of any agent from which a utility function may be constructed, a.k.a. any agent that “has” a utility function, must comply with the VNM axioms), an agent’s utility function is defined only up to positive affine transformation. This makes interpersonal utility comparison—and thus, any arithmetic, such as summation—impossible (i.e., undefined and meaningless).
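A toy illustration of why this matters (the numbers are arbitrary, chosen only to exhibit the effect): a positive affine transformation leaves an agent's own choices untouched, yet changes what a cross-agent sum would recommend.

```python
# Two agents' utilities over three outcomes (arbitrary numbers).
alice = {"A": 1.0, "B": 2.0, "C": 4.0}
bob = {"A": 3.0, "B": 2.0, "C": 1.0}

def best_by_sum(u1, u2):
    """Outcome that maximises the (ill-defined!) cross-agent sum."""
    return max(u1, key=lambda o: u1[o] + u2[o])

# Rescale Bob's function: v(x) = 10*u(x) + 5, a positive affine transformation.
bob_rescaled = {o: 10 * u + 5 for o, u in bob.items()}

# Bob's own preference ordering is unchanged by the rescaling...
assert sorted(bob, key=bob.get) == sorted(bob_rescaled, key=bob_rescaled.get)

# ...yet the "social" sum now recommends a different outcome entirely.
assert best_by_sum(alice, bob) == "C"
assert best_by_sum(alice, bob_rescaled) == "A"
```

Since both of Bob's functions represent exactly the same preferences, the sum cannot be tracking anything real about the two agents.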
This means that we cannot, in fact, add up two people’s decision-theoretic utilities for any given outcome.
I think your first objection is technically correct, but irrelevant to the point I was making; and your second objection is entirely consistent with my conclusion.
On “mistake one”: I am using “I assign utility U to living a happy life” as a shorthand for something like “In general, the difference in utilities I assign between worlds in which I am happily alive, and worlds in which I am not, is U, all else being equal.” This is a perfectly normal sort of phrasing; for example, the Wikipedia page on utility says that it “represents satisfaction experienced by the consumer from a good.” Do you object to this and any other talk of utility which isn’t phrased in terms of world-states?
On “mistake two”: I should have mentioned (and will edit to add) that economists don’t endorse interpersonal comparison of economic utility. But I’m not endorsing it either: I’m explicitly flagging it as a philosophical mistake, and explaining one reason why attempts to do so are misguided. This is more useful than simply saying that it’s ill-defined, because the latter leaves us to wonder why we can’t just construct a new way to compare utilities between people—for example, in another comment cousin_it is basically arguing for economic-style utility + interpersonal comparison.
Re: mistake one:

Firstly, your redefinition of utility values assumes that the difference within any pair of world-states which differ in some fixed way is constant, regardless of what other properties those world-states have. That does not seem a likely assumption to me, and in any case must be stated explicitly and defended (and I expect you will have some difficulty defending it).
More importantly, even if we agree to this quite significant assumption, what then? If we understand that it’s world-states that we’re concerned with, then the notion of “double-counting” is simply inappropriate. Each person’s valuation of a given world-state counts separately. Why should it not? Importantly, I do not see how your objection about families, etc., can be constructed in such a framework—even if you do the transformation to “relative utilities” that you propose!
Re: mistake two:
If you agree that interpersonal utility comparison is a mistake, then we do seem to be on the same page.
On the other hand, if your stated reason for believing it to be a mistake is the “double-counting” issue, then that is a bad reason, because there is no double-counting! The right reason for viewing it as a mistake is that it’s simply undefined—mathematical nonsense.
Re mistake two:

Okay, so it’s a mistake because it’s simply undefined mathematical nonsense. Now let me define a new form of utility which differs from economic utility only by the fact that interpersonal comparisons are allowed, and occur in whatever way you think is most reasonable. How do you feel about using this new form of utility to draw moral conclusions? I think my arguments are relevant to that question.
Re mistake one:
I’m not assuming that the difference within any pair of world states which differ in a certain way is constant any more than an economist is when they say “let X be the utility that is gained from consuming one unit of good Y”. Both are approximations, but both are useful approximations.
If you’d prefer, I can formalise the situation more precisely in terms of world-states. For each world-state, each member of the family assigns it utility equal to the number of family members still alive. So if they all die, that’s 0 from each. If they all survive, that’s 5 from each, and then the total utility from all of them is 25 (assuming we’re working in my “new form of utility” from above, where we can do interpersonal addition).
Meanwhile each loner assigns 1 utility to worlds in which they survive, and 0 otherwise. So now, if we think that maximising utility is moral, we’d say it’s more moral to kill 24 loners than one family of 5, even though each individual values their own life equally. I think that this conclusion is unacceptable, and so it is a reductio of the idea that we should maximise any quantity similar to economic utility.
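The sums described above work out as follows (a sketch using exactly the numbers stipulated in the text):

```python
FAMILY_SIZE = 5
NUM_LONERS = 25

def total_utility(family_alive, loners_alive):
    # Each of the five family members assigns utility equal to the number
    # of family members still alive; each surviving loner assigns 1 to
    # their own survival (0 otherwise).
    return FAMILY_SIZE * family_alive + loners_alive

status_quo = total_utility(5, 25)     # 25 + 25 = 50
kill_family = total_utility(0, 25)    # lose 25 utility
kill_24_loners = total_utility(5, 1)  # lose only 24

# A sum-maximiser prefers killing 24 loners to killing the family of 5.
assert kill_24_loners > kill_family
```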
Okay, so it’s a mistake because it’s simply undefined mathematical nonsense. Now let me define a new form of utility which differs from economic utility only by the fact that interpersonal comparisons are allowed, and occur in whatever way you think is most reasonable. How do you feel about using this new form of utility to draw moral conclusions?
My feeling about this new form of utility is: “this definition is incoherent”. It can’t be used to draw moral conclusions, because it’s a nonsensical concept in the first place.
That interpersonal utility comparisons are impossible in VNM utility is not some incidental fact, it is an inevitable consequence of the formalism’s assumptions. If you believe a different formalism—one without that consequence—is possible, I should very much like to hear about it… not to mention the fact that if you were to discover such a thing, tremendous fame and glory, up to and possibly even including a Nobel Prize, would be yours!
I’m not assuming that the difference within any pair of world states which differ in a certain way is constant any more than an economist is when they say “let X be the utility that is gained from consuming one unit of good Y”.
Just because economists sometimes say a thing, does not make that thing any less nonsensical. (If you doubt this, read any of Oskar Morgenstern’s work, for instance.)
If you’d prefer, I can formalise the situation more precisely in terms of world-states. [details snipped]
What if the loner assigns 50 utility to worlds in which they survive? Or 500? Then would we say that it’s more moral to kill many families than to kill one loner?
This problem has absolutely nothing to do with any “double-counting”, and everything to do with the obvious absurdities that result when you simply allow anyone to assign any arbitrary number they like to world-states, and then treat those numbers as if, somehow, they are on the same scale. I should hardly need to point out how silly that is. (And this is before we get into the more principled issues with interpersonal comparisons, of course.)
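To make the absurdity concrete (again with arbitrary numbers):

```python
# A family of five whose members each value "all five alive" at 5 sums to 25.
family_sum = 5 * 5

# A loner who simply writes down 500 for their own survival...
loner_value = 500

# ...now outweighs twenty such families under naive summation.
assert loner_value > family_sum
assert loner_value // family_sum == 20
```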
The first question in any such scenario has to be: “Where are these numbers coming from, and what do they mean?” If we can’t answer it in a rigorous way, then the discussion is moot.
That interpersonal utility comparisons are impossible in VNM utility is not some incidental fact, it is an inevitable consequence of the formalism’s assumptions.
Any consequence of a formalism’s assumptions is inevitable, so I don’t see what you mean. This happens to be an inevitable consequence which you can easily change just by adding a normalisation assumption. The Wikipedia page on social choice theory is all about how social choice theorists compare utilities interpersonally—and yes, Amartya Sen did win a Nobel Prize for related work. Mostly they use partial comparisons, but there have been definitions of total comparison which aren’t “nonsensical”.
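For instance, one crude normalisation (a sketch, not an endorsement: whether any such rescaling is defensible is exactly what is in dispute) maps every agent’s utilities onto [0, 1], which at least makes cross-agent sums well-defined and invariant to each agent’s original scale:

```python
def normalise(utilities):
    """Rescale so the agent's worst outcome maps to 0 and their best to 1."""
    lo, hi = min(utilities.values()), max(utilities.values())
    return {o: (u - lo) / (hi - lo) for o, u in utilities.items()}

alice = {"A": 0.0, "B": 50.0, "C": 100.0}
bob = {"A": 3.0, "B": 1.0, "C": 2.0}

# Any positive affine transformation of an agent's utilities washes out,
# so the normalised sum no longer depends on their arbitrary scale.
bob_rescaled = {o: 10 * u + 5 for o, u in bob.items()}
assert normalise(bob_rescaled) == normalise(bob)

social = {o: normalise(alice)[o] + normalise(bob)[o] for o in alice}
assert max(social, key=social.get) == "C"
```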
The first question in any such scenario has to be: “Where are these numbers coming from, and what do they mean?” If we can’t answer it in a rigorous way, then the discussion is moot.
I agree that if you’re trying to formulate a moral theory, then you need to come up with such numbers. My point is that, once you have come up with your numbers, then you need to solve the issue that I present. You may not think this is useful, but there are plenty of people who believe in desire utilitarianism; this is aimed at them.