Pinpointing Utility

Following Morality is Awesome. Related: Logical Pinpointing, VNM.

The eternal question, with a quantitative edge: A wizard has turned you into a whale, how awesome is this?

“10.3 Awesomes”

Meditate on this: What does that mean? Does that mean it’s desirable? What does that tell us about how awesome it is to be turned into a whale? Explain. Take a crack at it for real. What does it mean for something to be labeled as a certain amount of “awesome” or “good” or “utility”?

What is This Utility Stuff?

Most of us agree that the VNM axioms are reasonable, and that they imply that we should be maximizing this stuff called “expected utility”. We know that expectation is just a weighted average, but what’s this “utility” stuff?

Well, to start with, it’s a logical concept, which means we need to pin it down with the axioms that define it. For the moment, I’m going to conflate utility and expected utility for simplicity’s sake. Bear with me. Here are the conditions that are necessary and sufficient to be talking about utility:

  1. Utility can be represented as a single real number.

  2. Each outcome has a utility.

  3. The utility of a probability distribution over outcomes is the expected utility.

  4. The action that results in the highest utility is preferred.

  5. No other operations are defined.

I hope that wasn’t too esoteric. The rest of this post will be explaining the implications of those statements. Let’s see how they apply to the awesomeness of being turned into a whale:

  1. “10.3 Awesomes” is a real number.

  2. We are talking about the outcome where “A wizard has turned you into a whale”.

  3. There are no other outcomes to aggregate with, but that’s OK.

  4. There are no actions under consideration, but that’s OK.

  5. Oh. Not even taking the value?

Note 5 especially. You can probably look at the number without causing trouble, but if you try to treat it as meaningful for something other than conditions 3 and 4, even accidentally, that’s a type error.

Unfortunately, you do not have a finicky compiler that will halt and warn you if you break the rules. Instead, your error will be silently ignored, and you will go on, blissfully unaware that the invariants in your decision system no longer pinpoint VNM utility. (Uh oh.)
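If it helps to make condition 5 concrete, here is a minimal sketch of what such a finicky compiler might enforce, in Python, with made-up names and numbers: a utility type that permits only expectation (condition 3) and comparison (condition 4), and throws a type error on anything else.

```python
# A toy "shielded" utility type. Only conditions 3 and 4 are allowed;
# everything else is a type error. Names and numbers are invented.

class Util:
    def __init__(self, value):
        self._value = value  # hidden; peeking at it directly is numerology

    # Condition 3: the utility of a probability distribution is the expected utility.
    @staticmethod
    def expect(lottery):
        """lottery: list of (probability, Util) pairs."""
        return Util(sum(p * u._value for p, u in lottery))

    # Condition 4: utilities can be compared, so the preferred action can be picked.
    def __lt__(self, other):
        return self._value < other._value

    # Condition 5: nothing else is defined.
    def __add__(self, other):
        raise TypeError("adding raw utilities is not a defined operation")

    def __float__(self):
        raise TypeError("taking 'the value' of a utility is not defined")


whale_day = Util(10.3)   # "10.3 Awesomes"
normal_day = Util(0.0)

# Allowed: compare a 50/50 gamble against the sure thing.
gamble = Util.expect([(0.5, whale_day), (0.5, normal_day)])
print(gamble < whale_day)   # True

# Not allowed: float(whale_day) or whale_day + normal_day raise TypeError.
```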

Unshielded Utilities, and Cautions for Utility-Users

Let’s imagine that utilities are radioactive: if we are careful with our containment procedures, we can safely combine and compare them, but if we interact with an unshielded utility, it’s over; we’ve committed a type error.

To even get a utility to manifest itself in this plane, we have to do a little ritual. We have to take the ratio between two utility differences. For example, if we want to get a number for the utility of being turned into a whale for a day, we might take the difference between that scenario and what we would otherwise expect to do, and then take the ratio between that difference and the difference between a normal day and a day where we also get a tasty sandwich. (Make sure you take the absolute value of your unit, or you will reverse your utility function, which is a bad idea.)

So the form that the utility of being a whale manifests as might be “500 tasty sandwiches better than a normal day”. We have chosen “a normal day” for our datum, and “tasty sandwiches” for our units. Of course we could have just as easily chosen something else, like “being turned into a whale” as our datum, and “orgasms” for our units. Then it would be “0 orgasms better than being turned into a whale”, and a normal day would be “-400 orgasms from the whale-day”.
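Here is the summoning ritual as a Python sketch; the raw numbers are arbitrary placeholders, chosen only so the answer comes out to the 500 sandwiches above.

```python
# A utility only shows up as a plain number as the ratio of two utility
# differences. The raw scale here is deliberately arbitrary.
u = {"normal day": 12.0, "normal day + sandwich": 14.0, "whale day": 1012.0}

def measure(outcome, datum, unit_hi, unit_lo):
    """How many units (unit_hi minus unit_lo) better than datum is outcome?"""
    unit = abs(u[unit_hi] - u[unit_lo])   # absolute value, so the unit stays positive
    return (u[outcome] - u[datum]) / unit

print(measure("whale day", "normal day", "normal day + sandwich", "normal day"))
# -> 500.0, i.e. "500 tasty sandwiches better than a normal day"
```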

You say: “But you shouldn’t define your utility like that, because then you are experiencing huge disutility in the normal case.”

Wrong, and radiation poisoning, and type error. You tried to “experience” a utility, which is not in the defined operations. Also, you looked directly at the value of an unshielded utility (also known as numerology).

We summoned the utilities into the real numbers, but they are still utilities, and we can still only compare and aggregate them. The summoning only gives us a number that we can numerically do those operations on, which is why we did it. This is the same situation as time, position, velocity, etc., where we have to select units and datums to get actual quantities that mathematically behave like their ideal counterparts.

Sometimes people refer to this relativity of utilities as “positive affine structure” or “invariant up to a scale and shift”. That phrasing confuses me: it makes me think of an equivalence class of utility functions with numbers coming out, which don’t agree on the actual numbers but can be made to agree with a linear transform, rather than of a utility function as a space I can measure distances in. I’m an engineer, not a mathematician, so I find it much more intuitive and less confusing to think of it in terms of units and datums, even though it’s basically the same thing. This way, the utility function can scale and shift all it wants, and my numbers will always be the same. Equivalently, all agents that share my preferences will always agree that a day as a whale is “400 orgasms better than a normal day”, even if they use another basis themselves.
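That units-and-datums invariance can be checked directly: apply any positive scale and shift to the underlying function, and the measured number does not move. A small sketch, with made-up numbers again:

```python
# Measured utilities don't care about the underlying scale or zero point:
# any positive affine transform a*u + b (with a > 0) gives the same numbers.
def in_units(u, outcome, datum, unit_hi, unit_lo):
    return (u[outcome] - u[datum]) / abs(u[unit_hi] - u[unit_lo])

u1 = {"normal day": 0.0, "normal day + orgasm": 1.0, "whale day": 400.0}
u2 = {k: 4.0 * v - 12.0 for k, v in u1.items()}   # same preferences, different basis

for u in (u1, u2):
    print(in_units(u, "whale day", "normal day", "normal day + orgasm", "normal day"))
# both print 400.0: agents sharing these preferences agree on the measured number
```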

So what does it mean that being a whale for a day is 400 orgasms better than a normal day? Does it mean I would prefer 400 orgasms to a day as a whale? Nope. Orgasms don’t add up like that; I’d probably be quite tired of it by 15. (Remember that “orgasms” were defined as the difference between a day without an orgasm and a day with one, not as the utility of a marginal orgasm in general.) What it means is that I’d be indifferent between a normal day with a 1/400 chance of being a whale, and a normal day with a guaranteed extra orgasm.
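A quick check of that last claim, on the same made-up scale (whale day 400 units above a normal day, an extra orgasm 1 unit above it), using exact fractions:

```python
from fractions import Fraction as F

# Indifference just means the two lotteries have equal expected utility.
u_normal, u_orgasm_day, u_whale_day = F(0), F(1), F(400)   # illustrative scale

lottery_a = F(1, 400) * u_whale_day + F(399, 400) * u_normal   # tiny chance of whale day
lottery_b = u_orgasm_day                                       # guaranteed extra orgasm
print(lottery_a == lottery_b)   # True: both come out to exactly 1
```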

That is, utilities are fundamentally about how your preferences react to uncertainty. For example, you don’t have to think that each marginal year of life is as valuable as the last, if you don’t think you should take a gamble that will double your remaining lifespan with 60% certainty and kill you otherwise. After all, all that such a utility assignment even means is that you would take such a gamble. In the words of VNM:

We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate.

But suppose there are very good arguments that have nothing to do with uncertainty for why you should value each marginal life-year as much as the last. What then?

Well, “what then” is that we spend a few weeks in the hospital dying of radiation poisoning, because we tried to interact with an unshielded utility again (utilities are radioactive, remember?). The specific error is that we tried to manipulate the utility function with something other than comparison and aggregation. Touching a utility directly is just as much an error as observing it directly.

But if the only way to define your utility function is with thought experiments about what gambles you would take, and the only use for it is deciding what gambles you would take, then isn’t it doing no work as a concept?

The answer is no, but this is a good question because it gets us closer to what exactly this utility function stuff is about. The utility of utility is that defining how you would behave in one gamble puts a constraint on how you would behave in some other related gambles. As with all math, we put in some known facts, and then use the rules to derive some interesting but unknown facts.

For example, if we have decided that we would be indifferent between a tasty sandwich and a 1/500 chance of being a whale for tomorrow, and that we’d be indifferent between a tasty sandwich and a 30% chance of sun instead of the usual rain, then we should also be indifferent between a certain sunny day and a 1/150 chance of being a whale.
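Here is that chain of constraints worked through explicitly, as a sketch with exact fractions (basis: normal rainy day = 0, tasty-sandwich day = 1 unit):

```python
from fractions import Fraction as F

# Known facts, expressed as expected-utility equations:
# sandwich ~ 1/500 chance of whale-day   =>   1 = (1/500) * u_whale
# sandwich ~ 30% chance of sun           =>   1 = (3/10)  * u_sun
u_sandwich = F(1)
u_whale = u_sandwich / F(1, 500)   # = 500
u_sun = u_sandwich / F(3, 10)      # = 10/3

# Derived fact: the chance p of whale-day that matches a certain sunny day.
p = u_sun / u_whale
print(p)   # 1/150
```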

Monolithicness and Marginal (In)Dependence

If you are really paying attention, you may be a bit confused, because it seems to you that money or time or some other consumable resource can force you to assign utilities even if there is no uncertainty in the system. That issue is complex enough to deserve its own post, so I’d like to delay it for now.

Part of the solution is that as we defined them, utilities are monolithic. This is the implication of “each outcome has a utility”. What this means is that you can’t add and recombine utilities by decomposing and recombining outcomes. Being specific, you can’t take a marginal whale from one outcome and staple it onto another outcome, and expect the marginal utilities to be the same. For example, maybe the other outcome has no oceans for your marginal whale.
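A toy illustration of that point, with invented numbers: the utility function is keyed on whole outcomes, and the “marginal whale” comes out differently depending on which outcome you staple it onto.

```python
# Utilities attach to whole outcomes, not to detachable pieces of outcomes.
u = {
    ("whale", "ocean world"):     500.0,
    ("no whale", "ocean world"):    0.0,
    ("no whale", "desert world"): -50.0,
    ("whale", "desert world"):   -300.0,   # no oceans for your marginal whale
}

whale_bonus_ocean  = u[("whale", "ocean world")]  - u[("no whale", "ocean world")]
whale_bonus_desert = u[("whale", "desert world")] - u[("no whale", "desert world")]
print(whale_bonus_ocean, whale_bonus_desert)   # 500.0 -250.0: no consistent marginal whale
```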

For a bigger example, what we have said so far about the relative value of sandwiches and sunny days and whale-days does not necessarily imply that we are indifferent between a 1/250 chance of being a whale and any of the following:

  • A day with two tasty sandwiches. (Remember that a tasty sandwich was defined as a specific difference, not a marginal sandwich in general, which has no reason to have a consistent marginal value.)

  • A day with a 30% chance of sun and a certain tasty sandwich. (Maybe the tasty sandwich and the sun at the same time is horrifying for some reason. Maybe someone drilled into you as a child that “bread in the sun” was bad bad bad.)

  • etc. You get the idea. Utilities are monolithic and fundamentally associated with particular outcomes, not marginal outcome-pieces.

However, as in probability theory, where each possible outcome technically has its very own probability, in practice it is useful to talk about a concept of independence.

So for example, even though the axioms don’t guarantee in general that it will ever be the case, it may work out in practice that given some conditions, like there being nothing special about bread in the sun, and my happiness not being near saturation, the utility of a marginal tasty sandwich is independent of a marginal sunny day, meaning that sun+sandwich is as much better than just sun as just a sandwich is better than baseline, ultimately meaning that I am indifferent between {50%: sunny+sandwich; 50%: baseline} and {50%: sunny; 50%: sandwich}, and other such bets. (We need a better solution for rendering probability distributions in prose.)
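As a sketch of what that independence buys, with illustrative numbers: if the sun and sandwich increments add (given the stated conditions), the two 50/50 bets come out exactly equal.

```python
from fractions import Fraction as F

# Additive decomposition is the independence assumption; the numbers are illustrative.
u_baseline, du_sun, du_sandwich = F(0), F(10, 3), F(1)
u = {
    "baseline":     u_baseline,
    "sun":          u_baseline + du_sun,
    "sandwich":     u_baseline + du_sandwich,
    "sun+sandwich": u_baseline + du_sun + du_sandwich,
}

half = F(1, 2)
bet_1 = half * u["sun+sandwich"] + half * u["baseline"]
bet_2 = half * u["sun"] + half * u["sandwich"]
print(bet_1 == bet_2)   # True: indifferent between {50%: sunny+sandwich; 50%: baseline}
                        # and {50%: sunny; 50%: sandwich}
```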

Notice that the independence of marginal utilities can depend on conditions and that independence is with respect to some other variable, not a general property. The utility of a marginal tasty sandwich is not independent of whether I am hungry, for example.

There is a lot more to this independence thing (and linearity, and risk aversion, and so on), so it deserves its own post. For now, the point is that the monolithicness thing is fundamental, but in practice we can sometimes look inside the black box and talk about independent marginal utilities.

Dimensionless Utility

I liked this quote from the comments of Morality is Awesome:

Morality needs a concept of awfulness as well as awesomeness. In the depths of hell, good things are not an option and therefore not a consideration, but there are still choices to be made.

Let’s develop that second sentence a bit more. If all your options suck, what do you do? You still have to choose. So let’s imagine we are in the depths of hell and see what our theories have to say about it:

Day 78045. Satan has presented me with three options:

  1. Go on a date with Satan Himself. This will involve romantically torturing souls together, subtly steering mortals towards self-destruction, watching people get thrown into the lake of fire, and some very unsafe, very nonconsensual sex with the Adversary himself.

  2. Paperclip the universe.

  3. Satan’s court wizard will turn me into a whale and release me into the lake of fire, to roast slowly for the next month, kept alive by twisted black magic.

Wat do?

They all seem pretty bad, but “pretty bad” is not a utility. We could quantify paperclipping as a couple hundred billion lives lost. Being a whale in the lake of fire would be awful, but a bounded sort of awful. A month of endless horrible torture. The “date” is having to be on the giving end of what would more or less happen anyway, and then getting savaged by Satan. Still none of these are utilities.

Coming up with actual utility numbers for these in terms of tasty sandwiches and normal days is hard: it would be like measuring the microkelvin temperatures of your physics experiment with a Fahrenheit kitchen thermometer; in principle it might work, but it isn’t the best tool for the job. Instead, we’ll use a different scheme this time.

Engineers (and physicists?) sometimes transform problems into a dimensionless form that removes all redundant information from the problem. For example, for a heat conduction problem, we might define an isomorphic dimensionless temperature so that real temperatures between 78 and 305 °C become dimensionless temperatures between 0 and 1. Transforming a problem into dimensionless form is nearly always helpful, often in really surprising ways. We can do this with utility too.

Back to the depths of hell. The date with Satan is clearly the best option, so it gets dimensionless utility 1. The paperclipper gets 0. On that scale, I’d say roasting in the lake of fire is like 0.999 or so, but that might just be scope insensitivity. We’ll take it for now.
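A sketch of the transformation, with raw placeholder numbers invented so that the middle option lands near the 0.999 above:

```python
# Dimensionless form: map the worst option to 0 and the best to 1.
raw = {
    "date with Satan":        -1.0,     # placeholder raw utilities on some
    "whale in lake of fire":  -5e9,     # arbitrary scale, chosen only to
    "paperclip the universe": -5e12,    # illustrate the transformation
}

lo, hi = min(raw.values()), max(raw.values())
dimensionless = {k: (v - lo) / (hi - lo) for k, v in raw.items()}
print(dimensionless)   # date -> 1.0, lake-of-fire whale -> ~0.999, paperclipper -> 0.0

# Bonus: the dimensionless utility of the middle option is exactly the chance p
# in {p: date; 1-p: paperclipper} that makes you indifferent to it.
p = dimensionless["whale in lake of fire"]
mix = p * dimensionless["date with Satan"] + (1 - p) * dimensionless["paperclip the universe"]
print(mix == p)   # True
```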

The advantages of this approach are:

  1. The numbers are more intuitive. −5e12 QALYs, −1 QALY, and −50 QALYs from a normal day, or the equivalent in tasty sandwiches, just doesn’t have the same feeling of clarity as 0, 1, and 0.999. (For me at least. And yes, I know those numbers don’t quite match.)

  2. Not having to relate the problem quantities to far-away datums or drastically inappropriate units (tasty sandwiches, for this problem) makes the numbers easier and more direct to come up with. Also, we have to come up with fewer of them. The problem is self-contained.

  3. If defined right, the connection between probability and utility becomes extra-clear. For example: What chance between a Satan-date and a paperclipper would make me indifferent with a lake-of-fire-whale-month? 0.999! Unitless magic!

  4. All confusing redundant information (like negative signs) is removed, which makes it harder to accidentally do numerology or commit a type error.

  5. All redundant information is removed, which means you find many more similarities between problems. The value of this in general cannot be overstated. Just look at the generalizations made about Reynolds number! “[vortex shedding] occurs for any fluid, size, and speed, provided that Re is between ~40 and 10^3”. What! You can just say that in general? Magic! I haven’t actually done enough utility problems to know that we’ll find stuff like that, but I trust dimensionless form.

Anyways, it seems that going on that date is what I ought to do. So did we need a concept of awfulness? Did it matter that all the options sucked? Nope; the decision was isomorphic in every way to choosing lunch between a BLT, a turkey club, and a handful of dirt.

There are some assumptions in that lunch bit, and it’s worth discussing. It seems counterintuitive, or even wrong, to say that your decision process when faced with lunch should be the same as when faced with a decision involving torture, rape, and paperclips. The latter seems somehow more important. Where does that come from? Is it right?

This may deserve a bigger discussion, but basically, if you have finite resources (thought-power, money, energy, stress) that are conserved or even related across decisions, you get coupling of “different” decisions in a way that we didn’t have here. Your intuitions are calibrated for that case. Once you have decoupled the decisions by coming up with the actual candidate options, the depths-of-hell decision and the lunch decision really are totally isomorphic. I’ll probably address this properly later, if I discuss the instrumental utility of resources.

Anyways, once you put the problem in dimensionless form, a lot of decisions that seemed very different become almost the same, and a lot of details that seemed important or confusing just disappear. Bask in the clarifying power of a good abstraction.

Utility is Personal

So far we haven’t touched the issue of interpersonal utility. That’s because that topic isn’t actually about VNM utility! There was nothing in the axioms above about there being a utility for each {person, outcome} pair, only for each outcome.

It turns out that if you try to compare utilities between agents, you have to touch unshielded utilities, which means you get radiation poisoning and go to type-theory hell. Don’t try it.

And yet, it seems like we ought to care about what others prefer, and not just our own self-interest. But that caring belongs inside the utility function, in moral philosophy, not out here in decision theory.

VNM has nothing to say on the issue of utilitarianism besides the usual preference-uncertainty interaction constraints, because VNM is about the preferences of a single agent. If that single agent cares about the preferences of other agents, that goes inside the utility function.

Conversely, because VNM utility is out here, axiomatized for the sovereign preferences of a single agent, we don’t much expect it to show up in there, in a discussion of utilitarian preference aggregation. In fact, if we do encounter it in there, it’s probably a sign of a failed abstraction.

Living with Utility

Let’s go back to how much work utility does as a concept. I’ve spent the last few sections hammering on the work that utility does not do, so you may ask “It’s nice that utility theory can constrain our bets a bit, but do I really have to define my utility function by pinning down the relative utilities of every single possible outcome?”

Sort of. You can take shortcuts. We can, for example, wonder all at once whether, for all possible worlds where such is possible, you are indifferent between saving n lives and {50%: saving 2*n; 50%: saving 0}.
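That one sweeping indifference does a lot of work. As a sketch: normalize saving zero lives to 0 and one life to 1, and the rule u(n) = (1/2)·u(2n) + (1/2)·u(0) forces utility to be proportional to lives saved, at least along doublings:

```python
from fractions import Fraction as F

u = {0: F(0), 1: F(1)}   # normalization: saving 0 lives = 0, saving 1 life = 1

# "saving n ~ {50%: saving 2n; 50%: saving 0}" means u(n) = (1/2)*u(2n) + (1/2)*u(0),
# i.e. u(2n) = 2*u(n). Propagate that constraint up the doublings:
n = 1
for _ in range(10):
    u[2 * n] = 2 * u[n]
    n *= 2

print(u[1024])   # 1024: utility proportional to lives saved (along doublings)
```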

If that seems reasonable and doesn’t break in any case you can think of, you might keep it around as a heuristic in your ad-hoc utility function. But then maybe you find a counterexample where you don’t actually prefer the implications of such a rule. So you have to refine it a bit to respond to this new argument. This is OK; the math doesn’t want you to do things you don’t want to.

So you can save a lot of small thought experiments by doing the right big ones, like above, but the more sweeping of a generalization you make, the more probable it is that it contains an error. In fact, conceptspace is pretty huge, so trying to construct a utility function without inside information is going to take a while no matter how you approach it. Something like disassembling the algorithms that produce your intuitions would be much more efficient, but that’s probably beyond science right now.

In any case, in the interim, before we figure out how to formally reason the whole thing out in advance, we have to get by with some good heuristics and our current intuitions, with a pinch of last-minute sanity checking against the VNM rules. Ugly, but better than nothing.

The whole project is made quite a bit harder in that we are not just trying to reconstruct an explicit utility function from revealed preference; we are trying to construct a utility function for a system that doesn’t even currently have consistent preferences.

At some point, either the concept of utility isn’t really improving our decisions, or it will come into conflict with our intuitive preferences. In some cases it’s obvious how to resolve the conflict; in others, not so much.

But if VNM contradicts our current preferences, why do we think it’s a good idea at all? Surely it’s not wise to be tampering with our very values?

The reason we like VNM is that we have a strong meta-intuition that our preferences ought to be internally consistent, and VNM seems to be the only way to satisfy that. But it’s good to remember that this is just another intuition, to be weighed against the rest. Are we ironing out garbage inconsistencies, or losing valuable information?

At this point I’m dangerously out of my depth. As far as I can tell, the great project of moral philosophy is an adult problem, not suited for mere mortals like me. Besides, I’ve rambled long enough.

Conclusions

What a slog! Let’s review:

  • Maximize expected utility, where utility is just an encoding of your preferences that ensures a sane reaction to uncertainty.

  • Don’t try to do anything else with utilities, or demons may fly out of your nose. This especially includes looking at the sign or magnitude, and comparing between agents. I call these things “numerology” or “interacting with an unshielded utility”.

  • The default for utilities is that they are monolithic and inseparable from the entire outcome they are associated with. It takes special structure in your utility function to be able to talk about the marginal utility of something independently of particular outcomes.

  • We have to use the difference-and-ratio ritual to summon the utilities into the real numbers. Record utilities using explicit units and datums, and use dimensionless form for your calculations, which will make many things much clearer and more robust.

  • If you use a VNM basis, you don’t need a concept of awfulness, just awesomeness.

  • If you want to do philosophy about the shape of your utility function, make sure you phrase it in terms of lotteries, because that’s what utility is about.

  • The desire to use VNM is just another moral intuition in the great project of moral philosophy. It is conceivable that you will have to throw it out if it causes too much trouble.

  • VNM says nothing about your utility function. Consequentialism, hedonism, utilitarianism, etc. are up to you.