# Total Utility is Illusory

(Abstract: We have the notion that people can have a “total utility” value, defined perhaps as the sum of all their changes in utility over time. This is usually not a useful concept, because utility functions can change. In many cases the less-confusing approach is to look only at the utility from each individual decision, and not attempt to consider the total over time. This leads to insights about utilitarianism.)

Let’s consider the utility of a fellow named Bob. Bob likes to track his total utility; he writes it down in a logbook every night.

Bob is a stamp collector; he gets +1 utilon every time he adds a stamp to his collection, and he gets −1 utilon every time he removes a stamp from his collection. Bob’s utility was zero when his collection was empty, so we can say that Bob’s total utility is the number of stamps in his collection.

One day a movie theater opens, and Bob learns that he likes going to movies. Bob counts +10 utilons every time he sees a movie. Now we can say that Bob’s total utility is the number of stamps in his collection, plus ten times the number of movies he has seen.
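To make the bookkeeping concrete, here is a minimal sketch (my own illustration, not from the post) of Bob’s scheme: his utility function emits a delta per event, and his “total utility” is just the running sum of those deltas.

```python
def utility_delta(event):
    """Bob's utility function: maps an event to a utilon delta."""
    deltas = {"add_stamp": +1, "remove_stamp": -1, "watch_movie": +10}
    return deltas[event]

# A sample history: two stamps added, one movie watched, one stamp removed.
events = ["add_stamp", "add_stamp", "watch_movie", "remove_stamp"]

# "Total utility" is nothing more than the sum of the emitted deltas.
total = sum(utility_delta(e) for e in events)
print(total)  # 1 + 1 + 10 - 1 = 11
```

The event names are invented for illustration; the point is only that the total is a derived quantity with no independent existence beyond the per-event deltas.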

(A note on terminology: I’m saying that Bob’s utility function is the thing that emits +1 or −1 or +10, and his total utility is the sum of all those emissions over time. I’m not sure whether this is standard terminology.)

This should strike us as a little bit strange: Bob now has a term in his total utility which is mostly based on history, and mostly independent of the present state of the world. Technically, we might handwave and say that Bob places value on his memories of watching those movies. But Bob knows that’s not actually true: it’s the act of watching the movies that he enjoys, and he rarely thinks about them once they’re over.

If a hypnotist convinced Bob that he had watched ten billion movies, Bob would write down in his logbook that he had a hundred billion utilons. (Plus the number of stamps in his stamp collection.)

Let’s talk some more about that stamp collection. Bob wakes up on June 14 and decides that he doesn’t like stamps any more. Now, Bob gets −1 utilon every time he adds a stamp to his collection, and +1 utilon every time he removes one. What can we say about his total utility? We might say that Bob’s total utility is the number of stamps in his collection at the start of June 14, plus ten times the number of movies he’s watched, plus the number of stamps he has removed from his collection since June 14. Or we might say that all Bob’s utility from his stamp collection prior to June 14 was false utility, and we should strike it from the record books. Which answer is better?
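The June 14 ambiguity can be made explicit with a hypothetical sketch (the event counts are my own): the same event history, scored under two different bookkeeping conventions, produces two defensible “totals” once the utility function changes mid-stream.

```python
def old_u(event):
    """Bob's utility function before June 14: likes stamps."""
    return {"add_stamp": +1, "remove_stamp": -1}[event]

def new_u(event):
    """Bob's utility function from June 14 on: dislikes stamps."""
    return {"add_stamp": -1, "remove_stamp": +1}[event]

before = ["add_stamp"] * 5     # events before June 14
after = ["remove_stamp"] * 3   # events after June 14

# Convention 1: score each event by the function in force at the time.
total_as_lived = sum(old_u(e) for e in before) + sum(new_u(e) for e in after)

# Convention 2: retroactively rescore everything with the new function.
total_revised = sum(new_u(e) for e in before + after)

print(total_as_lived, total_revised)  # 8 vs -2
```

Neither convention is privileged, which is the post’s point: the per-decision deltas are well defined, but the running total is not.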

...Really, neither answer is better, because the “total utility” number we’re discussing just isn’t very useful. Bob has a very clear utility function which emits numbers like +1 and +10 and −1; he doesn’t gain anything by keeping track of the total separately. His total utility doesn’t seem to track how happy he actually feels, either. It’s not clear what Bob gains from thinking about this total utility number.

I think some of the confusion might be coming from Less Wrong’s focus on AI design.

When you’re writing a utility function for an AI, one thing you might try is to specify the total utility first: you might say “your total utility is the number of balls you have placed in this bucket” and then let the AI work out the implementation details of how happy each individual action should make it.

However, if you’re looking at utility functions for actual people, you might encounter something weird like “I get +10 utility every time I watch a movie”, or “I woke up today and my utility function changed”, and then if you try to compute that person’s total utility, you can get confused.

Let’s now talk about utilitarianism. For simplicity, let’s assume we’re talking about a utilitarian government which is making decisions on behalf of its constituency. (In other words, we’re not talking about utilitarianism as a moral theory.)

We have the notion of total utilitarianism, in which the government tries to maximize the sum of the utility values of each of its constituents. This leads to “repugnant conclusion” issues in which the government generates new constituents at a high rate until all of them are miserable.

We also have the notion of average utilitarianism, in which the government tries to maximize the average of the utility values of its constituents. This leads to issues (I’m not sure if there’s a snappy name for them) where the government tries to kill off the least happy constituents so as to bring the average up.

The problem with both of these notions is that they take the “total utility of all constituents” as an input, and then they change the number of constituents, which changes the underlying utility function.

I think the right way to do utilitarianism is to ignore the “total utility” thing; that’s not a real number anyway. Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action. I propose that we call this “delta utilitarianism”, because it isn’t looking at the total or the average, just at the delta in utility from each action.
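The decision rule above can be sketched in a few lines. This is my own rendering of the proposal, with invented names and payoffs: at each decision point, sum the utility deltas that each *current* constituent assigns to each candidate action, and pick the argmax. Potential new people are simply absent from the constituent list.

```python
def delta_utilitarian_choice(constituents, actions):
    """Pick the action maximizing summed utility deltas of current constituents.

    constituents: list of functions mapping an action to a utility delta.
    """
    def group_delta(action):
        return sum(u(action) for u in constituents)
    return max(actions, key=group_delta)

# Two hypothetical constituents with per-action deltas.
alice = lambda a: {"build_park": 3, "add_person": 1}[a]
carol = lambda a: {"build_park": 2, "add_person": -1}[a]

choice = delta_utilitarian_choice([alice, carol], ["build_park", "add_person"])
print(choice)  # build_park (group deltas: 5 vs 0)
```

Note that any utility the hypothetical new person would have is never consulted; only the deltas of people who exist at decision time enter the sum.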

This solves the “repugnant conclusion” issue because, at the time when you’re considering adding more people, it’s clearer that you’re considering the utility of your constituents at that time, which does not include the potential new people.

• Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action. I propose that we call this “delta utilitarianism”, because it isn’t looking at the total or the average, just at the delta in utility from each action.

If you look at the sum of the utilities of all your actions if you choose option A, minus the sum if you choose option B, then all of the actions before the decision cancel out, and you get just the difference in utility between option A and option B. They’re equivalent.
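The cancellation argument can be checked directly (the numbers below are arbitrary placeholders of my own): comparing post-decision totals differs from comparing the options’ deltas only by a shared history term, which drops out of the subtraction.

```python
history = [1, -1, 10, 1]   # past utility deltas, shared by both branches
delta_A, delta_B = 4, 7    # deltas from choosing option A or option B

total_A = sum(history) + delta_A   # total utility if A is chosen
total_B = sum(history) + delta_B   # total utility if B is chosen

# The shared history cancels: comparing totals == comparing deltas.
print(total_A - total_B, delta_A - delta_B)  # -3 -3
```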

Technically, delta utilitarianism is slightly more resistant to infinities. As long as any two actions have a finite difference, you can calculate it, even if the total utility is infinite. I don’t think that would be very helpful.

• I think the key difference is that delta utilitarianism handles it better when the group’s utility function changes. For example, if I create a new person and add them to the group, that changes the group’s utility function. Under delta utilitarianism, I explicitly don’t count the preferences of the new person when making that decision. Under total utilitarianism, [most people would say that] I do count the preferences of that new person.

• Under total utilitarianism, [most people would say that] I do count the preferences of that new person.

You only count their preferences under preference utilitarianism. I never really understood that form.

If you like having more happy people, then your utility function is higher for worlds with lots of happy people, and creating happy people makes the counter go up. If you like having happier people, but don’t care how many there are, then having more people doesn’t do anything.

• If, as you propose, you completely ignore the welfare of people who don’t exist yet, then it seems to me you will give rather odd answers to questions like this: You and your partner are going to have a baby. There is some spiffy new technology that enables you to ensure that the baby will not have nasty genetic conditions like Down’s syndrome, cystic fibrosis, spina bifida, etc. For some reason the risk of these is otherwise rather high. How much are you willing to pay for the new technology to be applied?

Your system will presumably not make the answer be zero, because the parents will probably be happier with a healthier child. But it seems like the numbers it produces will be underestimates.

(There are other things I find confusing and possibly wrong in this proposal, but that may simply indicate that I haven’t understood it. I’ll list some of them anyway. I don’t think anyone, “total utilitarian” or not, wants to do utility calculations in anything like the way your hypothetical Bob does; your proposal still needs a way of aggregating utilities across people, and that’s at least as problematic as aggregating utilities over time; the argument for the “repugnant conclusion” doesn’t in fact depend on aggregating utilities over time anyway; and your system is plainly incomplete, since it says nothing about how it does aggregate the utilities of “your constituents” when making a decision.)

• I can’t figure out what you’re trying to say.

When people talk about utility, they often end up plagued by ambiguity in their definitions. In the context of decision theory, the domain of a utility function has to take into account everything that the agent cares about and that their decisions affect (including events that happen in the future, so it doesn’t make sense to talk about the utility an agent is experiencing at a particular time), and the agent prefers probability distributions over outcomes that have higher expected utility. In the context of classical utilitarianism, utility is basically just a synonym for happiness. How are you trying to use the term here?

Edit: From your clarification, it sounds like you’re actually talking about aggregated group utility, rather than individual utility, and just claiming that the utilities being aggregated should consist of the utility functions of the people who currently exist, not the utility functions of the people who will exist in the outcome being considered. But I’m still confused, because your original example only referred to a single person.

• 15 Jun 2014 17:41 UTC

Sorry for posting a comment despite not really thinking about the matter very strenuously.

It seems to me that the post is about the formalisms of temporary utility and constant utility.

It would seem to me that the stamps, as used in the text, actually provide temporary utility: they make the person collecting them happy for a limited period of time. The same goes for watching movies.

So if you want to do this total-utility thing, then perhaps it simply needs to be formulated in the manner of utility over time. And total utility would be expected or average utility over time.

• It’s not obvious that you’ve gained anything here. We can reduce this to total utilitarianism: just assume that everyone’s utility is zero at the decision point. You still have the repugnant-conclusion issue, where you’re trying to decide whether to create more people based on summing utilities across populations.

• I think there’s a definite difference. As soon as you treat utility as part of decision-making, rather than just an abstract thing-to-maximize, you are allowed to break the symmetry between existing people and nonexisting people.

If I want to take the action with the highest total delta-U, and some actions create new people, the most straightforward approach only counts the delta-U according to currently existing people. This is actually my preferred solution.

The second most straightforward approach is to take the action with the highest delta-U according to the people who exist after you take the action. This is bad, because it leads straight to killing off all humans and replacing them with easily satisfied homunculi. Or to the not-as-repugnant repugnant conclusion, if all you’re allowed to do is create additional people.

• Wouldn’t the highest delta-U be to modify yourself so that you maximize the utility of people as they are right now, and ignore future people even after they’re born?

• Nope.

• Why not?

Let me try making this more explicit.

Alice has utility function A. Bob will have utility function B, but he hasn’t been born yet.

You can make choice u or v; then, once Bob is born, you get another choice between x and y.

A(u) = 1, A(v) = 0, A(x) = 1, A(y) = 0

B(u) = 0, B(v) = 2, B(x) = 0, B(y) = 2

If you can’t precommit, you’ll do u the first time, for 1 util under A, and y the second, for 2 utils under A+B (compared to 1 util for x).

If you can precommit, then you know that if you don’t, you’ll pick uy. Precommitting to ux gives you 1 more util under A than uy does, and since you’re still operating under A, that’s what you’ll do.

While I’m at it, you can also get into a prisoner’s dilemma with your future self, as follows:

A(u) = 1, A(v) = 0, A(x) = 2, A(y) = 0

B(u) = −1, B(v) = 2, B(x) = −2, B(y) = 1

Note that this gives:

A+B(u) = 0, A+B(v) = 2, A+B(x) = 0, A+B(y) = 1

Now, under A, you’d want u for 1 util, and once Bob is born, under A+B you’d want y for 1 util.

But if you instead took vx, that would be worth 2 utils for A and 2 utils for A+B. So vx is better than uy both from Alice’s perspective and from Alice+Bob’s perspective. Certainly that would be a better option.
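These numbers can be verified mechanically. A hypothetical check (my own code, using the payoff tables above): Alice’s utility A applies throughout, Bob’s B only exists after the first choice, and without precommitment each choice is made greedily under the utility function in force at that step.

```python
# Payoff tables from the example above.
A = {"u": 1, "v": 0, "x": 2, "y": 0}
B = {"u": -1, "v": 2, "x": -2, "y": 1}

# Greedy (no precommitment): first choice under A alone, second under A+B.
first = max("uv", key=lambda c: A[c])              # picks 'u'
second = max("xy", key=lambda c: A[c] + B[c])      # picks 'y'
greedy_A = A[first] + A[second]                    # value of uy under A
greedy_AB = greedy_A + B[first] + B[second]        # value of uy under A+B

# Precommitted plan vx.
plan_A = A["v"] + A["x"]                           # value of vx under A
plan_AB = plan_A + B["v"] + B["x"]                 # value of vx under A+B

print(greedy_A, greedy_AB, plan_A, plan_AB)  # 1 1 2 2
```

So vx beats the greedy path uy under both A and A+B, matching the prisoner’s-dilemma framing.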

• Suppose we build a robot that takes a census of currently existing people, and a list of possible actions, and then takes the action that causes the biggest increase in utility of currently existing people.

You come to this robot before your example starts, and ask, “Do you want to precommit to action vx, since that results in higher total utility?”

And the robot replies, “Does taking this action of precommitment cause the biggest increase in utility of currently existing people?”

“No, but you see, in one time step there’s this Bob guy who’ll pop into being, and if you add in his utilities from the beginning, by the end you’ll wish you’d precommitted.”

“Will wishing that I’d precommitted be the action that causes the biggest increase in utility of currently existing people?”

You shake your head. “No...”

“Then I can’t really see why I’d do such a thing.”

• And the robot replies, “Does taking this action of precommitment cause the biggest increase in utility of currently existing people?”

I’d say yes. It gives an additional 1 util to currently existing people, since it ensures that the robot will make a choice that people like later on.

Are you only counting the amount they value the world as it currently is? For example, if someone wants to be buried when they die, the robot wouldn’t arrange it, because by the time it happens they won’t be in a state to appreciate it?

• Ooooh. Okay, I see what you mean now; for some reason I’d interpreted you as saying almost the opposite.

Yup, I was wrong.

• My intended solution was that, if you check the utility of your constituents from creating more people, you’re explicitly not taking the utility of the new people into account. I’ll add a few sentences at the end of the article to try to clarify this.

Another thing I can say is that, if you assume that everyone’s utility is zero at the decision point, it’s not clear why you would see a utility gain from adding more people.

• Isn’t this equivalent to total utilitarianism that only takes into account the utility of already extant people? Also, isn’t this inconsistent over time (someone who used this as their ethical framework could predict specific discontinuities in their future values)?

• I suppose you could say that it’s equivalent to “total utilitarianism that only takes into account the utility of already extant people, and only takes into account their current utility function [at the time the decision is made] and not their future utility function”.

(Under mere “total utilitarianism that only takes into account the utility of already extant people”, the government could wirehead its constituency.)

Yes, this is explicitly inconsistent over time. I would actually argue that the utility function of any group of people will be inconsistent over time (as preferences evolve, new people join, and old people leave), and any decision-making framework needs to be able to handle that inconsistency intelligently. Failure to handle that inconsistency intelligently is what leads to the Repugnant Conclusion.

• We have the notion of total utilitarianism, in which the government tries to maximize the sum of the utility values of each of its constituents. This leads to “repugnant conclusion” issues in which the government generates new constituents at a high rate until all of them are miserable.

We also have the notion of average utilitarianism, in which the government tries to maximize the average of the utility values of its constituents. This leads to issues (I’m not sure if there’s a snappy name for them) where the government tries to kill off the least happy constituents so as to bring the average up.

Not quite. Suppose our societal utility function is S(n) = n × U(n), where n is the number of people in the society, and U(n) is the average utility gain per year per person (which decreases as n increases, for high n, because of overcrowding and resource scarcity). Then you don’t maximise S(n) just by increasing n until U(n) reaches 0. There will be an optimum n, beyond which 1 × U(n+1), the utility from yet one more citizen, is less than n × ( U(n) − U(n+1) ), the loss of utility by the other n citizens from adding that person.
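A toy check of this point (the declining U(n) below is my own invention, not from the comment): with a per-person utility that falls linearly in n, S(n) = n × U(n) peaks well before U(n) reaches zero.

```python
def U(n):
    """Toy average utility per person: declines with n, hits 0 at n = 100."""
    return 10 - 0.1 * n

def S(n):
    """Societal utility: population times average utility."""
    return n * U(n)

# Find the population size that maximises S(n).
best = max(range(1, 101), key=S)
print(best, U(best))  # optimum at n = 50, where U(50) = 5 is still positive
```

So the marginal citizen stops being worth adding long before average utility is driven to zero, as the comment argues.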

• It might be useful to distinguish between the actual total utility experienced so far, and the estimates of that quantity which can be worked out from various viewpoints.

Suppose we break it down by week. If, during the first week of March 2014, Bob finds utility (e.g. pleasure) in watching movies, collecting stamps, owning stamp collections, and having watched movies (4 different things), then you’d multiply the duration (1 week) by the rate at which those things add to his utility experienced, to get how much that week adds to his total lifetime utility experienced.

If, during the second week of March, a fire destroys his stamp collection, that wouldn’t reduce his lifetime total. What it would do is reduce the rate at which he adds to that total during the following weeks.

• Now let’s take a different example. Suppose there is a painter whose only concern is their reputation upon their death, as measured by the monetary value of the paintings they put up for one final auction. Painting gives them no joy. Finishing a painting doesn’t increase their utility, only the expected amount of utility that they will reap at some future date.

If, before they died, a fire destroyed the warehouse holding the paintings they were about to auction off, then they would account the net utility experienced during their life as zero. Having spent years owning lots of paintings, and having had a high expectation of gaining future utility during that time, wouldn’t have added anything to their actual total utility over those years.

How is that affected by the possibility of the painter changing their utility function?

If they later decide that there is utility to be experienced in weeks spent improving their skill at painting (by means of painting pictures, even if those pictures are destroyed before ever being seen or sold), does that retroactively change the total utility added during the previous years of their life?

I’d say no.

Either utility experienced is real, or it is not. If it is real, then a change in the future cannot affect the past. It can affect the estimate you are making now of the quantity in the past, just as an improvement in telescope technology might affect a modern-day scientist’s estimate of the explosive force of a nova that happened 1 million years ago; but it can’t affect the quantity itself, just as a change to modern telescopes can’t actually go back in time to alter the nova itself.

• “Instead, every time you arrive at a decision point, evaluate what action to take by checking the utility of your constituents from each action. I propose that we call this “delta utilitarianism”, because it isn’t looking at the total or the average, just at the delta in utility from each action.”

Perhaps we could call it “marginal utility.”