You seem to be assuming that your U(x) is per-person, so that each person p would have a separate U_p(x) = x_p + log y_p (or whatever), where x_p is how much x that person has and y_p is how much y that person has.
You then imply a universal or societal “overall” utility function of the form V(x) = summation( U_p(x) ) over all p.
Your fallacy is in applying the log transform to the individual U_p(x) functions rather than to the top-level function V(x) as a whole.
Agreed, and as a further illustration:
Economists commonly apply a monotone transformation to a utility function in order to make it more tractable for a particular problem. Such a transformation preserves the ordering of choices, though not the absolute relationships between them: any outcome preferred under the transformed function is also preferred under the original, so consumption decisions are unchanged.
This would be a problem for ethicists, because there is a serious difference between, say, U(x,y) = e^x · y and U(x,y) = x + log y when deciding the outcome of an action. Economists would note that, given prices, consumption behavior is essentially fixed, and be unsurprised. Ethicists would have to see the e^x and conclude that humanity should essentially spend the rest of its waking days creating xs; not so in the second function. Of course, the latter function is merely the log transform of the former: log(e^x · y) = x + log y.
ETA: Well, the economist would be a little surprised at the first utility function, because they don’t tend to see or postulate things quite that extreme. But it wouldn’t be problematic.
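The ranking-preservation point can be checked directly. A minimal sketch, using a few hypothetical bundles of x and y (the bundle values are made up for illustration):

```python
import math

# The original utility function U1(x, y) = e^x * y, and its
# log transform U2(x, y) = log(U1) = x + log(y).
def u1(x, y):
    return math.exp(x) * y

def u2(x, y):
    return x + math.log(y)

# Hypothetical consumption bundles (amounts of x and y).
bundles = [(1, 4), (2, 1), (3, 2), (0, 9)]

# Because log is strictly increasing, both functions rank the
# bundles identically, so a chooser picks the same bundle under either.
rank1 = sorted(bundles, key=lambda b: u1(*b))
rank2 = sorted(bundles, key=lambda b: u2(*b))
assert rank1 == rank2
```

The absolute utility numbers differ wildly between the two forms, but the order never does, which is all the economist's consumption story needs.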
Why not so in the second function?
I was unclear in the setup: the utility function isn’t supposed to reflect a representative agent for all of humanity, but one individual or a proper subset of individuals within humanity. (If it were meant to be “the human utility function,” then you are certainly right that only xs would be produced after everybody in humanity had 1 y, for either U-function.)
Imagine we make 100 more units of x. With the second function, it doesn’t matter ethically whether we spread these out over 100 people or give them all to one: they produce the same quantity of utility. In particular, the additional utility produced per unit of x is always 1 in the second function.
In the first function, there is a serious difference between distributing the xs and concentrating them in one person—a difference brought out by sum utilitarianism vs. average utilitarianism vs. Rawlsian theory.
I use e^x as an example, but it would be superseded by somebody with e^(e^x) or x! or x^^x, etc.
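A quick numeric check of the spread-vs-concentrate claim, assuming 100 people who each start with x = 0 and y = 1 (the starting endowments are my assumption, not from the discussion):

```python
import math

def u_linear(x, y):      # second function: U = x + log y
    return x + math.log(y)

def u_exp(x, y):         # first function: U = e^x * y
    return math.exp(x) * y

people = 100
spread = [(1, 1)] * people                          # one x per person
concentrated = [(100, 1)] + [(0, 1)] * (people - 1) # all x to one person

def total(u, alloc):
    return sum(u(x, y) for x, y in alloc)

# Linear form: both allocations add exactly 100 utils in total,
# so a sum-utilitarian is indifferent to the distribution.
assert total(u_linear, spread) == total(u_linear, concentrated) == 100

# Exponential form: concentration dwarfs spreading under the sum,
# while the minimum (Rawlsian) view pulls the opposite way.
assert total(u_exp, concentrated) > total(u_exp, spread)
assert min(u_exp(x, y) for x, y in spread) > min(u_exp(x, y) for x, y in concentrated)
```

The last two assertions show the divergence named above: sum utilitarianism favors concentrating the xs, while a Rawlsian minimum favors spreading them, and only the exponential form makes the question arise at all.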
You seem to be assuming that your U(x) is per-person, so that each person a would have a separate Uₐ(x) = xₐ + log yₐ (or whatever), where xₐ is how much x that person has and yₐ is how much y that person has.
You then imply a universal or societal “overall” utility function of the form V(x) = ∑( Uₐ(x) ) over all a.
Your fallacy is in applying the log transform to the individual Uₐ(x) functions rather than to the top-level function V(x) as a whole.
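To see why the placement of the transform matters, here is a toy two-person check (the allocations are hypothetical, with each Uₐ = e^(xₐ) · yₐ as in the running example): taking the log of the whole sum V preserves the social ranking, while taking the log of each Uₐ before summing erases a strict preference.

```python
import math

# Two allocations of the same total x = 4 across two people,
# each person holding y = 1.
alloc_concentrated = [(4, 1), (0, 1)]
alloc_spread = [(2, 1), (2, 1)]

def u(x, y):
    return math.exp(x) * y

def V(alloc):
    # Top-level social sum: V = sum over persons of U_a
    return sum(u(x, y) for x, y in alloc)

def V_per_person_log(alloc):
    # The fallacious version: log applied to each U_a before summing
    return sum(math.log(u(x, y)) for x, y in alloc)

# Transforming V as a whole is harmless: log is monotone, so
# log V ranks the two allocations exactly as V does.
assert (V(alloc_concentrated) > V(alloc_spread)) == \
       (math.log(V(alloc_concentrated)) > math.log(V(alloc_spread)))

# But summing log(U_a) collapses V's strict preference for
# concentration into indifference, changing the ethical verdict.
assert V(alloc_concentrated) > V(alloc_spread)
assert abs(V_per_person_log(alloc_concentrated) - V_per_person_log(alloc_spread)) < 1e-9
</```

The per-person log turns e^(xₐ) · yₐ into xₐ + log yₐ, whose sum depends only on the total x, which is exactly the distribution-indifference discussed above.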
I wasn’t intending to imply that the society had homogeneous or even transferable utility functions—that was the substance of my clarification from the previous post.
Insofar as there is no decision-maker at the top level, it wouldn’t make much sense to do so. The transform is just used (by economists) to compute individuals’ decisions in a mathematically simpler form—typically by separating a Cobb-Douglas function such as x^a · y^b into the sum a·log x + b·log y.
The point is that for economists the two functions produce the same results: people buy the same things, make the same decisions, etc. You cannot aggregate economists’ utility functions except through a proxy like money. For ethicists, the exact form of the utility function matters, and aggregation is possible—and that’s the problem I’m trying to identify.
I don’t see how aggregating utility functions is possible without some unjustifiable assumptions.
Agreed—that’s related to what I’m arguing. In particular, utility would have to be transferable, and we’d have to know the form of the function in some detail. Not clear that either of those can be resolved.
How does that not imply V(x) = ∑( Uₐ(x) )?
That is precisely the ethical aggregation of utility I am arguing against. You’re right—an ethicist trying to use utility will have to aggregate. Thus, the form of the individual utility functions matters a great deal, if we believe we can do that.
We can’t apply log-transforms, in the ethical sense against which I am arguing, because the form of the function matters.
I agree that correct aggregation is nontrivial.
If I’m still following the thread of this conversation correctly, the major alternative on the table is the behavior of the hypothetical economist, who presumably chooses to aggregate individual utilities via free-market interactions.
By what standard—that is to say, by what utility function—are we judging whether the economist or the naive-ethicist-who-aggregates-by-addition is right (or if both are totally wrong)?
Ah yes, there’s the key.
I’m not sure there is anything (yet?) available for the naive ethicist to sum. The economist’s argument, generally construed, may be that we do not know how to, and possibly cannot, construct a consistent utility function for individuals; the best we can do is to allow those individuals to search for local maxima under conditions that mostly keep them from inhibiting the searches of others.
In some sense, the economist is advocating a distributed computation of the global maximum utility.
It’s not clear that we can talk meaningfully about a meta-utility function for choosing between the economist’s and the ethicist’s aggregative functions. Wouldn’t determining that meta-function be the same question as determining the correct aggregative function directly?
In short, absent better options, I think there’s not much to do other than structure the system as best we can to allow that computation—and, at most, institute targeted programs that eliminate the most obvious disutilities with the least impact on others’ utilities.
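The “distributed computation” picture can be sketched as agents independently hill-climbing their own utilities, with no aggregation step at the top. A toy version (the quadratic per-agent utility and its optimum at x = 3.0 are assumptions for illustration only):

```python
import random

# Each agent privately evaluates only its own holdings; nobody
# ever sums utilities across agents.
def agent_utility(x):
    return -(x - 3.0) ** 2   # hypothetical: each agent happiest at x = 3

def local_search(x, steps=400, step_size=0.1):
    """Greedy hill-climb: accept a random step only if it improves
    this agent's own utility."""
    for _ in range(steps):
        candidate = x + random.choice([-step_size, step_size])
        if agent_utility(candidate) > agent_utility(x):
            x = candidate
    return x

random.seed(0)
# Four agents with different starting holdings, searching in parallel.
holdings = [local_search(x0) for x0 in [0.0, 1.0, 5.0, 8.0]]

# Each agent ends near its own optimum without any central planner.
assert all(abs(x - 3.0) < 0.2 for x in holdings)
```

The sketch also shows the limitation raised above: the process finds each agent's local maximum, but nothing in it certifies anything about a global or social maximum.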
These constructions deal in should-judgments, implying that the economist, the ethicist, and we are at least attempting to discuss a meta-utility function, even if we don’t or can’t know what it is.
Yes.
Just because the question is very, very hard doesn’t mean there’s no answer.
Definitely true. That’s why I said “yet?” It may be possible in the future to develop something like a general individual utility function, but we certainly do not have that now.
Perhaps I’m confused. The meta-utility function—isn’t that literally identical to the social utility function? Beyond the social function, utilitarianism/consequentialism isn’t making tradeoffs—the goal of the whole philosophy is to maximize the utility of some group, and once we’ve defined that group (a task for which we cannot use a utility function without infinite regress), the rest is a matter of the specific form.
Yes. The problem is that we can’t actually calculate with it because the only information we have about it is vague intuitions, some of which may be wrong.
If only we were self-modifying intelligences… ;)
You seem to be assuming that your U(x) is per-person, so that each person p would have a separate U_p(x) = x_p + log y_p (or whatever), where x_p is how much x that person has and y_p is how much y that person has.
You then imply a universal or societal “overall” utility function of the form V(x) = ∑( U_p(x) ) over all p.
Your fallacy is in applying the log transform to the individual U_p(x) functions rather than to the top-level function V(x) as a whole.
I was going to say that the second function punishes you if you don’t provide at least a little y, but that’s true of the first function too (x + log y goes to −∞ as y → 0, while e^x · y falls to 0, its lowest possible value).