Utilons vs. Hedons

Related to: Would Your Real Preferences Please Stand Up?

I have to admit, there are a lot of people I don’t care about. Comfortably over six billion, I would bet. It’s not that I’m a callous person; I simply don’t know that many people, and even if I did, I hardly have time to process that much information. Every day hundreds of millions of incredibly wonderful and terrible things happen to people out there, and if they didn’t, I wouldn’t even know it.

On the other hand, my professional goals deal with economics, policy, and improving decision making for the purpose of making millions of people I’ll never meet happier. Their happiness does not affect my experience of life one bit, but I believe it’s a good thing, and I plan to work hard to figure out how to create more happiness.

This underscores an essential distinction in understanding any utilitarian viewpoint: the difference between experience and values. One can value unweighted total utility. One cannot experience unweighted total utility. It will always hurt more if a friend or loved one dies than if someone you never knew, in a place you never heard of, dies. I would be truly amazed to meet someone who is an exception to this rule and is not an absolute stoic. Your experiential utility function may have coefficients for other people’s happiness (or at least your perception of it), but there’s no way it has an identical coefficient for everyone everywhere, unless that coefficient is zero. On the other hand, you probably care in an abstract way about whether people you don’t know die or are enslaved or imprisoned, and may even contribute some money or effort to prevent such things from happening. I’m going to use “utilons” to refer to value utility units and “hedons” to refer to experiential utility units; I’ll demonstrate shortly that this is a meaningful distinction, and that the fact that we value utilons over hedons explains much of the apparent failure of our moral reasoning.

Let’s try a hypothetical to illustrate the difference between experiential and value utility. An employee of Omega, LLC,[1] offers you a deal to absolutely double your hedons but kill five people in, say, rural China, then wipe your memory of the deal. Do you take it? What about five hundred? Five hundred thousand?

I can’t speak for you, so I’ll go through my evaluation of this deal and hope it generalizes reasonably well. I don’t take it at any of these values. There’s no clear hedonistic explanation for this; after all, I forget it happened. It would be absurd to say that the disutility I experience between entering the agreement and having my memory wiped is so tremendous as to outweigh everything I will experience for the rest of my life (particularly since I immediately forget this disutility), and this is the only way I can see my rejection being explained in hedons. In fact, even if the memory wipe weren’t part of the deal, I doubt the act of having a few people killed would really cause me more displeasure than doubling my future hedons would yield; I’d bet more than five people have died in rural China as I’ve written this post, and it hasn’t upset me in the slightest.

The reason I don’t take the deal is my values; I believe it’s wrong to kill random people to improve my own happiness. If I knew that they were people who sincerely wanted to be dead, or that they were, say, serial killers, my decision would be different, even though my hedonic experience would probably not be that different. If I knew that millions of people in China would be significantly happier as a result, as well, then there’s a good chance I’d take the deal even if it didn’t help me. I seem to be maximizing utilons and not hedons, and I think most people would do the same.

Also, as another example so obvious that I feel like it’s cheating: if most people read the headline “100 workers die in Beijing factory fire” or “1,000 workers die in Beijing factory fire,” they will not feel ten times the hedonic blow from the second, even if they live in Beijing. That it is ten times worse is measured in our values, not our experiences. These values are correct, since roughly ten times as many people have seriously suffered from the fire, but if we’re talking about people’s hedons, no individual suffers ten times as much.

In general, people value utilons much more than hedons. The illegality of drugs is one illustration of this; arguments for (and against) drug legalization are an even better one. Such arguments usually involve weakening organized crime, increasing safety, reducing criminal behaviour, reducing expenditures on prisons, improving treatment for addicts, and improving similar values. “Lots of people who want to will get really, really high” is only very rarely touted as a major argument, even though the net hedonic value of drug legalization would probably be massive (just as the hedonic cost of Prohibition in the 1920s was clearly massive).

As a practical matter, this is important because many people do things precisely because they are important in their abstract value system, even if they result in little or no hedonic payoff. This, I believe, is an excellent explanation of why success is no guarantee of happiness: success is conducive to getting hedons, but it also tends to cost a lot of hedons, so there is little guarantee that earned wealth will be a net positive (and, with anchoring, hedons will get a lot more expensive than they are for the less successful). On the other hand, earning wealth (or status) is a very common value, so people pursue it irrespective of its hedonistic payoff.

It may be convenient to argue that the hedonistic payoffs must balance out, but this does not make it the case in reality. Some people work hard on assignments that are practically meaningless to their long-term happiness because they believe they should, not because they have any delusions about their hedonistic payoff. To say, “If you did X instead of Y because you ‘value’ X, then the hedonistic cost of breaking your values must exceed the hedonic difference between Y and X,” is to win an argument by definition; you have to actually figure out the values and see if that’s true. If it’s not, then I’m not a hedon-maximizer. You can’t then assert that I’m an “irrational” hedon-maximizer unless you can make some very clear distinction between “irrationally maximizing hedons” and “maximizing something other than hedons.”

This dichotomy also describes akrasia fairly well, though I’d hesitate to say it truly explains it. Akrasia is what happens when we maximize our hedons at the expense of our utilons. We play video games, watch TV, or post on blogs because it feels good, and we feel bad about it because, first, “it feels good” is not recognized as a major positive value in most of our utilon-functions, and second, doing our homework is recognized as a major positive value in our utilon-functions. The experience makes us procrastinate, and our values make us feel guilty about it. Just as we should not needlessly multiply causes, neither should we erroneously merge them.

Furthermore, this may cause our intuition to seriously interfere with utility-based hypotheticals, such as these. Basically, you choose to draw cards, one at a time, each of which has a 10% chance of killing you and a 90% chance of doubling your utility. Logically, if your current utility is positive and you assign a utility of zero[2] (or greater) to your death (which makes sense in hedons, but not necessarily in utilons), you should draw cards until you die. The problem, of course, is that if you draw a card a second, you will be dead within a minute with P ≈ 0.9982, and dead within an hour with P ≈ 1 − 1.88×10^−165.
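Those probabilities follow directly from a 10% per-draw death chance; a quick sanity check (a sketch in Python, with one draw per second as in the setup above):

```python
# Sanity check of the card-drawing probabilities: each draw kills with
# probability 0.1, so surviving n independent draws has probability 0.9**n.

def death_prob(n_draws: int) -> float:
    """Probability of having died at some point within n_draws draws."""
    return 1.0 - 0.9 ** n_draws

# One draw per second: dead within a minute (60 draws) with P ~ 0.9982.
print(f"P(dead within a minute) = {death_prob(60):.4f}")

# And within an hour (3600 draws), the survival chance is ~1.88e-165.
print(f"P(surviving an hour)    = {0.9 ** 3600:.3g}")
```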

There’s a bigger problem that causes our intuition to reject this hypothetical as “just wrong”: it leads to major errors in both utilons and hedons. The mind cannot comprehend unlimited doubling of hedons. I doubt you can imagine being 2^60 times as happy as you are now; indeed, I doubt it is meaningfully possible to be so happy. As for utilons, most people assign a much greater value to “not dying” than to having more hedons. Thus, a hedonic reading of the problem returns an error because repeated doubling feels meaningless, and a utilon reading may return an error if we assign a significant enough negative value to death. But if we look at it purely in terms of numbers, we end up very, very happy right up until we end up very, very dead.
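The “very happy until very dead” arithmetic can be made explicit. Treating utility as a bare number (a sketch only; nothing here claims hedons actually scale this way), each draw multiplies expected utility by 0.9 × 2 = 1.8, so the expected value explodes as 1.8^n even while the survival probability collapses as 0.9^n:

```python
# Expected utility after n draws grows as 1.8**n (each draw multiplies
# expected utility by 0.9 * 2 + 0.1 * 0 = 1.8), while the probability
# of being alive to enjoy any of it shrinks as 0.9**n.
for n in (10, 60, 600):
    expected_multiplier = 1.8 ** n
    survival = 0.9 ** n
    print(f"n={n:4d}: expected utility x{expected_multiplier:.3g}, "
          f"P(alive) = {survival:.3g}")
```

This is exactly the gap between the two readings: the numbers say “keep drawing,” while almost everyone’s values say otherwise.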

Any useful utilitarian calculus must take into account that hedonic utility is, for most people, incomplete. Value utility is often a major motivating factor, and it need not translate perfectly into hedonic terms. Incorporating value utility seems necessary to have a map of human happiness that actually matches the territory. It may also be good that values can be easier to change than hedonic experiences. But assuming that people maximize hedons, and then assuming quantitative values that conform to this assumption, proves nothing about what actually motivates people and risks serious systematic error in furthering human happiness.

We know that our experiential utility cannot encompass all that really matters to us, so we have a value system that we place above it, precisely to avoid risking destroying the whole world to make ourselves marginally happier, or pursuing any other means of gaining happiness that carries tremendous potential expense.

[1] Apparently Omega has started a firm due to excessive demand for its services, or to avoid having to talk to me.

[2] This assumption is rather problematic, though zero seems to be the only correct value of death in hedons. But imagine that you just won the lottery (without buying a ticket, presumably) and got selected as the most important, intelligent, attractive person in whatever field or social circle you care most about. How bad would it be to drop dead? Now imagine you just got captured by some psychopath and are going to be tortured for years until you eventually die. How bad would it be to drop dead? Assigning zero (or the same value, period) to both outcomes seems wrong. I realize you can say that death in the first case is negative and in the second is positive relative to expected utility, but still, the value of death does not seem identical, so I’m suspicious of assigning it the same value in both cases. I realize this is hand-wavy; I think I’d need a separate post to address this issue properly.