An only slightly relevant question, which I nevertheless haven’t yet seen addressed: If a utilitarian desires to maximise other people’s utilities, and the other people are utilitarians themselves, also deriving their utility from the utilities of others (the original utilitarian included), doesn’t that make utilitarianism impossible to define? The consensus seems to be that one can’t take one’s own mental states as an argument of one’s own utility function. But utilitarians rarely object to plugging others’ mental states into their utility functions, so the danger of circularity isn’t avoided. Is there some clever solution to this?
No, because utilitarianism does not specify a utilitarian’s desires; it specifies what they consider moral. There are lots of things we desire to do that aren’t moral, and that we choose not to do because they are not moral.
I believe this doesn’t answer my question; I will reformulate the problem in order to remove potentially problematic words and make it more specific:
Let the world contain at least two persons, P1 and P2, with utility functions U1 and U2. Both are traditional utilitarians: each values the happiness of the other. Assume that U1 is a sum of two terms, H2 + u1(X), where H2 is some measure of the happiness of P2, u1(X) represents P1’s utility unrelated to P2’s happiness, and X is the state of the rest of the world; similarly U2 = H1 + u2(X). (H1 and H2 are monotonic functions of happiness but not necessarily linear, whatever linearity in happiness would even mean, so having U as a linear function of H is still quite general.)
Also, as for most people, the happiness of the model utilitarians is correlated with their utility. Let’s again assume that the utilities decompose into sums of independent terms, such that H1 = h1(U1) + w1(X), where w1 contains all non-utility sources of happiness and h1(.) is an increasing function; similarly for the second agent.
So we have:
U1 = h2(U2) + w2(X) + u1(X)
U2 = h1(U1) + w1(X) + u2(X)
Whether this system has a solution (for U1 and U2) depends on the details of h1, h2, u1, u2, w1, w2 and X. But my point is that the system of equations is a direct analogue of the forbidden
U = h(U) + u(X)
i.e. the case when one’s utility function takes itself as an argument.
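Whether the circularity is actually fatal can be checked numerically. As a minimal sketch, assume (hypothetically) that h1 and h2 are both 0.5·tanh, and collapse the X-dependent terms w2(X) + u1(X) and w1(X) + u2(X) into constants a1 and a2. Since 0.5·tanh has Lipschitz constant 0.5 < 1, the Banach fixed-point theorem guarantees the iteration converges to the unique solution of the system:

```python
import math

def solve(a1, a2, tol=1e-12, max_iter=1000):
    """Iterate U1 = h2(U2) + a1, U2 = h1(U1) + a2 to a fixed point.

    a1 stands in for w2(X) + u1(X), a2 for w1(X) + u2(X).
    h1(u) = h2(u) = 0.5 * tanh(u) is a contraction (Lipschitz 0.5 < 1),
    so the iteration converges to the unique solution.
    """
    h = lambda u: 0.5 * math.tanh(u)
    u1, u2 = 0.0, 0.0
    for _ in range(max_iter):
        n1, n2 = h(u2) + a1, h(u1) + a2
        if abs(n1 - u1) + abs(n2 - u2) < tol:
            return n1, n2
        u1, u2 = n1, n2
    return u1, u2

u1, u2 = solve(1.0, 2.0)
# Check that (u1, u2) really satisfies both equations.
assert abs(u1 - (0.5 * math.tanh(u2) + 1.0)) < 1e-9
assert abs(u2 - (0.5 * math.tanh(u1) + 2.0)) < 1e-9
```

So for contractive h1 and h2 the self-reference is harmless; the trouble only arises when the mutual dependence is too strong (loop gain at least one), in which case no stable solution need exist.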
“Also, as for most people, the happiness of the model utilitarians is correlated with their utility.”
This is untrue in general. I would prefer that someone who I am unaware of be happy, but it cannot make me happier since I am unaware of that person. In general, it is important to draw a distinction between the concept of a utility function, which describes decisions being made, and that of a hedonic function, which describes happiness, or, if you are not purely a hedonic utilitarian, whatever functions describe other things that are mentioned in, but not identical to, your utility function.
Yes, I may not know the exact value of my utility since I don’t know the value of every argument it takes, and yes, there are consequently changes in utility which aren’t accompanied by corresponding changes in happiness, but no, this doesn’t mean that utility and happiness aren’t correlated. Your comment would be a valid objection to the relevance of my original question only if happiness and utility were strictly isolated and independent of each other, which, for most people, isn’t the case.
Also, this whole issue could be sidestepped if the utility function of the first agent took the utility of the second agent as an argument directly, without the intermediation of happiness. I am not sure, however, whether standard utilitarianism allows caring about other agents’ utilities.
There may be many people whose utility you are not aware of, but there are also many people whose utility you are aware of, and whose utility you can affect with your actions. I think prase’s points are quite interesting even considering only the people in your awareness / sphere of influence.
I’m not sure exactly why prase disagrees with me—I can think of many mutually exclusive reasons that it would take a while to write out individually—but since two people have now responded I guess I should ask for clarification. Why is the scenario described impossible?
Here’s another way to look at it: imagine that everyone starts at time t1 with some level of utility, U[n]. Each person then forms a belief about the sum of everyone else’s utility (at time t1) and updates by adding some function of that summed (averaged, whatever) utility to their own happiness. Let’s assume that function is some variant of the sigmoid function, bounded between negative one and one; this is actually probably not too far off from reality. Then the maximum happiness (from the utility of others) that a person can gain is one (and the minimum is negative one). And assuming that most people’s base level of happiness is somewhat larger than this effect, the system is going to be reasonably stable.
This is a much more reasonable model, since we live in a time-varying world, and our beliefs about that world change over time as we gain more information.
When information propagates fast relative to the rate of change of external conditions, the dynamic model converges to the stable point which would be the solution of the static model—are the models really different in any important aspect?
Instability is indeed eliminated by the use of sigmoid functions, but then the utility gained from the happiness (of others) is bounded. Bounded utility functions solve many problems, the “repugnant conclusion” of the OP included, but some prominent LWers object to their use, pointing to scope insensitivity. (I personally have no problem with bounded utilities.)
I think you are on to something brilliant here. The thing that is new to me in your question is the recursive aspect of utilitarianism. If a theory of morality says the moral thing to do is to maximize utility, then clearly maximizing utility is itself a thing that has utility.
From here, in an engineering sense, you’d have at least two different places you could go. A sort of naive place to go would be to have each person maximize total utility independently of what others are doing, noting that other people’s utility summed up is much larger than one’s own utility. Then to a very large extent your behavior will be driven by maximizing other people’s utility. In a naive design involving, say, 100 utilitarians, one would be “over-driving” the system by ~100×, since each utilitarian would be separately calculating everybody else’s utility and trying to maximize it. In some sense, it would be like a feedback system with way too much gain: 99 people all trying to maximize your utility.
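The “too much gain” intuition can be made concrete with a toy linear model (all parameters here are hypothetical): each of N agents adds a weight g times the sum of the other N − 1 agents’ utilities to its own fixed contribution. In the symmetric case the loop gain is g·(N − 1); the system settles when that is below 1 and blows up when it exceeds 1:

```python
def simulate(n, g, steps):
    """Iterate u_i <- c_i + g * (sum of the other agents' utilities)."""
    c = [1.0] * n                  # each agent's own fixed contribution
    u = c[:]
    for _ in range(steps):
        total = sum(u)
        u = [c[i] + g * (total - u[i]) for i in range(n)]
    return u

stable = simulate(100, 0.005, 200)    # loop gain 99 * 0.005 ~ 0.5 < 1
unstable = simulate(100, 0.02, 50)    # loop gain 99 * 0.02  ~ 2   > 1

# The stable run settles at c / (1 - 99 g) = 1 / 0.505 ~ 1.98;
# the unstable run roughly doubles every step.
assert abs(stable[0] - 1.0 / (1.0 - 99 * 0.005)) < 1e-6
assert unstable[0] > 1e10
```

With N = 100 and g = 1 (each utilitarian fully counting everyone else), the loop gain would be 99, which is exactly the over-driving described above.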
An alternative place to go would be to say utility is a meta-ethical consideration, that an ethical system should have the property that it maximizes total utility. But then from engineering considerations you would expect 1) you would have lots of different rule systems that would come close to maximizing utility and 2) among the simplest and most effective would be to have each agent maximizing its own utility under the constraint of rules which were designed to get rid of anti-synergistic effects and to enhance synergistic effects. So you would expect contract law, anti-fraud law, laws against bad externalities, laws requiring participation in good externalities. But in terms of “feedback,” each agent in the system would be actively adjusting to maximize its own utility within the constraints of the rules.
This might be called rule utilitarianism, but really I think it is a hybrid of rule utilitarianism and justified selfishness (Rand’s egoism? Economics’ “homo economicus,” the rational utility maximizer?). It is a hybrid because you don’t ONLY have rules which maximize utility, and you don’t ONLY have maximizing individual utility as the moral rule.
As for bounded utilities: utility functions need not be bounded, so long as their contribution to happiness is bounded.