Knightian uncertainty: a rejection of the MMEU rule

Recently, I found myself in a conversation with someone advocating the use of Knightian uncertainty. He (whom I’m anonymizing as Sir Percy) made suggestions that are useful to most bounded reasoners, and which can be integrated into a Bayesian framework. He also claimed to have preferences that depend upon his Knightian uncertainty, and claimed that he’s not an expected utility maximizer. Further, he claimed that Bayesian reasoning cannot capture his preferences. Specifically, Sir Percy said he maximizes minimum expected utility given his Knightian uncertainty, using what I will refer to as the “MMEU rule” to make decisions.
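For concreteness, here is a minimal sketch of that decision rule as I understand it. The function names and the representation of Knightian uncertainty as a finite set of candidate probability distributions are my own illustration, not Sir Percy’s formulation:

```python
# A minimal sketch of the MMEU ("maximize minimum expected utility") rule.
# An agent with Knightian uncertainty holds a *set* of candidate probability
# distributions over states of the world, rather than a single prior. For
# each action it computes the expected utility under every distribution in
# the set, and picks the action whose worst (minimum) expected utility is
# highest.

def expected_utility(dist, payoffs):
    """dist: {state: probability}; payoffs: {state: utility of that state}."""
    return sum(p * payoffs[state] for state, p in dist.items())

def mmeu_choice(actions, credence_set):
    """actions: {name: payoffs-by-state}; credence_set: list of distributions."""
    def worst_case_eu(payoffs):
        return min(expected_utility(dist, payoffs) for dist in credence_set)
    return max(actions, key=lambda name: worst_case_eu(actions[name]))
```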

In my previous post, I showed that Bayesian expected utility maximizers can exhibit behavior in accordance with his preferences. Two such reasoners, Paranoid Perry and Cautious Caul, were explored. These hypothetical agents demonstrate that it is possible for Bayesians to be “ambiguity averse”, i.e. to avoid certain types of uncertainty.

But Perry and Caul are unnatural agents using strange priors. Is this because we are twisting the Bayesian framework to represent behavior it is ill-suited to emulate? Or does the strangeness of Perry and Caul merely reveal a strangeness in the MMEU rule?

In this post, I’ll argue the latter: maximization of minimum expected utility is not a good decision rule, for the same reason that Perry and Caul seem irrational. My rejection of the MMEU rule will follow from my rejections of Perry and Caul.

A rejection of Perry

I understand the appeal of Paranoid Perry, the agent that assumes ambiguity is resolved adversarially. Humans often keep the worst case in mind, avoid big gambles, and make sure that they’ll be OK even if everything goes wrong. Perry promises to capture some of this intuitively reasonable behavior.

Unfortunately, this promise is not kept. From the description of the MMEU rule, you might think that Perry is forgoing high utility in the average case to ensure moderate utility in the worst case. But this is not so: Perry willingly takes huge gambles, so long as those gambles are resolved by “normal” uncertainty rather than “adversarial” uncertainty.

Allow me to reiterate: maximizing minimum expected utility does not ensure that you do well in the worst case. It merely selects a single type of uncertainty against which to play defensively, and then gambles against the rest. To illustrate, consider the following Game of Drawers:

There are two boxes, Box 1 and Box 2. Each box has two drawers, drawer A and drawer B. Each drawer contains a bet, as follows:

1A. 99% lose $1000, 1% gain $99,300 (expectation: $3)
1B. Gain $2

2A. 99% lose $1000, 1% gain $99,500 (expectation: $5)
2B. Gain $10

You face one of the boxes (you do not know which) and you must choose one of the drawers. Which do you choose?

Imagine that you have “ambiguity” (“Knightian uncertainty”) about which box you face, but that you believe the gambles inside the boxes are fair: this is a setup analogous to the Ellsberg urn game, except that it gives the opposite intuition.

In the Game of Drawers, I expect most people would choose drawer B (and win either $2 or $10). However, Paranoid Perry (and Cautious Caul, and Sir Percy) would choose drawer A.

In the Game of Drawers, Perry acts such that 99% of the time it loses $1000.

Wasn’t Paranoid Perry supposed to reason so that it does well in the worst case? What went wrong?

Paranoid Perry reasons that nature gets to pick which box it faces, and that nature will force Perry into the worse box. Box 1 is strictly worse than Box 2, so Perry expects to face Box 1. And drawer A in Box 1 has higher expected utility than drawer B in Box 1, so Perry takes drawer A, a gamble that loses Perry $1000 99% of the time!
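To make the numbers concrete, here is a small sketch of that comparison. It treats dollars as utility for simplicity, which is an assumption of this illustration rather than part of the game:

```python
# Expected dollar value of each drawer inside each box (dollars stand in for
# utility here, purely for illustration).

def ev(lottery):
    """Expected value of a lottery given as [(probability, payoff), ...]."""
    return sum(p * x for p, x in lottery)

drawer_A = {
    "Box 1": ev([(0.99, -1000), (0.01, 99300)]),  # $3
    "Box 2": ev([(0.99, -1000), (0.01, 99500)]),  # $5
}
drawer_B = {"Box 1": 2, "Box 2": 10}

# MMEU: evaluate each drawer in its least convenient box, then maximize.
print(min(drawer_A.values()), min(drawer_B.values()))  # 3 vs 2 -> pick drawer A
# So Perry picks drawer A and, 99% of the time, loses $1000.

# A Bayesian who treats the boxes as equally likely compares plain averages:
print(sum(drawer_A.values()) / 2, sum(drawer_B.values()) / 2)  # 4 vs 6 -> drawer B
```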

Perry has no inclination to avoid big gambles. Perry isn’t making sure that the worst scenario is acceptable; it is maximizing expected utility in the worst scenario that nature can pick. Within that least convenient world, Perry (and Caul, and Sir Percy) take standard Bayesian gambles.

Bayesian gambles may seem reckless, but Perry is not the solution: Perry simply divides uncertainty into two classes, and treats one of them too defensively and the other just as recklessly as a Bayesian would. Two flaws don’t make a feature.

Perry assumes that some of its uncertainty gets resolved unfavorably by nature, regardless of whether or not it actually is. In some domains this captures caution, yes. In others, it’s just a silly waste. As soon as nature actually starts acting adversarially, Bayesians become cautious too. The difference is that a Bayesian is not forced to act as if one arbitrary segment of its uncertainty is adversarially resolved.

Perry believes — with absolute certainty, disregarding all evidence no matter how strong — that Nature never cuts it any slack.

A rejection of Caul

I understand, too, the appeal of Cautious Caul. Caul reasons about multiple possible worldparts, and attempts to ensure that the Caul-sliver in the least convenient worldpart does well enough. Instead of insisting that convenient things can’t happen (like Perry), Caul only cares about the inconvenient parts. Perhaps this better captures our intuition that people should be less reckless than Bayesians?

Expected utility maximizers happily trade utility in one branch of the possibility space for proportional utility in another branch, regardless of which branch had higher utility in the first place. Some people have moral intuitions that say this is wrong, and that we should be unwilling to trade utility away from unfortunate branches and into fortunate branches.

But this moral intuition is flawed, in a way that reveals confusion both about worst cases and about utility.

There’s a huge difference between what the average person considers to be a worst-case scenario (e.g., losing the bet) and what a Bayesian considers to be the worst-case scenario (e.g., physics has been lying to you and this is about to turn into the worst possible world). Or, to put things glibly, the human version of the worst case is “you lose the bet”, whereas the Bayesian version of the worst case is “the bet was a lie and now everybody will be tortured forever”.

You can’t optimize for the actual worst case in any reasonably complex system. There are some small systems (say, software used to control trains) where people actually do worry about the absolute worst case, but upon inspection, these efforts are consistent with expected utility maximization. A train crash is pretty costly.

And, in fact, expected utility maximization can capture caution in general. We don’t need Cautious Caul in order to act with appropriate caution in a Bayesian framework. Caution is not implemented by new decision rules; it is implemented in the conversion from money (or whatever) to utility. Allow me to illustrate:

Suppose that the fates are about to offer me a bet and then roll a thousand-sided die. The die seems fair (to me), and my information gives me a uniform probability distribution over values between 1 and 1000: my probability mass is about to split into a thousand shards of equal measure. Before the die is rolled, I am given a choice between two options:

  1. No matter what the die rolls, I get $100.

  2. If the die rolls a 1, I pay $898. Otherwise, I get $101.

The second option yields slightly more expected money ($100.001 versus $100), but I would choose the former. Why? Losing $898 is more bad than the extra dollars are good. I’m more than happy to burn a little expected money in order to avoid the branch where I have to pay $898. In this case, I act cautiously in the intuitive sense — and I do this as an expected utility maximizer. How? Well, consider the following bet instead:

  1. No matter what the die rolls, I get 100 utility.

  2. If the die rolls a 1, I lose 898 utility. Otherwise, I gain 101 utility.

Now I pick bet 2 in a heartbeat. What changed? Utils have already factored in everything that I care about.

I am risk-neutral in utils. If I am loss-averse, then the fact that the losing version of me will experience a sharp pang of frustration, and perhaps a lasting depression and lowered productivity, has already been factored into the calculation. It’s not like I lose 898 utility and then feel bad about it: the bad feelings are included in the number. The fact that all the other versions of me (who get 101 utility) will feel a little sad and remorseful, and will feel a little frustrated because the world is unfair, has already been factored into their utility numbers: it’s not like they see that they got 101 utility and then feel remorse. (Similarly, their relief and glee have already been rolled into the numbers too.)

The intuition that we shouldn’t trade utility from unfortunate branches mostly stems from a misunderstanding of utility. Utility already takes into account your egalitarianism, your preference curve for dollars, and so on. Once these things are accounted for, you should trade utility from unfortunate branches into fortunate branches: if this feels bad to you, then you haven’t got your utility calculations quite right.

Expected utility maximizers can be cautious. They can avoid ruinous bets. They can play it safe. But all of this behavior is encoded in the utility numbers: we don’t need Cautious Caul to exhibit these preferences.
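As a concrete illustration of the claim that caution lives in the dollars-to-utils conversion, here is a small sketch of the die bet above. The particular loss-averse conversion is an assumption of mine, chosen only to show the shape of the argument:

```python
# The thousand-sided-die bet, evaluated twice: once in raw dollars, and once
# through an assumed, illustrative loss-averse conversion from dollars to
# utils. The agent is risk-neutral in utils throughout, yet it picks the
# "safe" option once dollars have been converted.

def ev(lottery):
    return sum(p * x for p, x in lottery)

def utils(dollars, loss_aversion=2.5):
    """Illustrative conversion: losses hurt more than equal-sized gains help."""
    return dollars if dollars >= 0 else loss_aversion * dollars

option_1 = [(1.0, 100)]                     # $100 no matter what
option_2 = [(0.001, -898), (0.999, 101)]    # pay $898 on a 1, else gain $101

print(ev(option_1), ev(option_2))           # 100.0 vs 100.001: option 2 wins in dollars
print(sum(p * utils(x) for p, x in option_1),
      sum(p * utils(x) for p, x in option_2))  # 100.0 vs ~98.65: option 1 wins in utils
```

In the second, utility-denominated version of the bet there is no conversion left to do, so the straight expected-utility comparison favors bet 2, just as described above.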

A rejection of the MMEU rule

Bets come with stigma. They are traditionally only offered to humans by other humans, and anyone offering a bet is usually either a con artist or a philosophy professor. “Reject bets by default” is a good heuristic for most people.

Advocates of Bayesian reasoning talk about accepting bets without flinching, and that can seem strange. I think this comes down to a fundamental mismatch between colloquial bets and Bayesian bets.

Colloquial bets are offered by skeevy con artists who probably know something you don’t. Bayesian bets, on the other hand, arise whenever the agent must make a decision. “Rejecting the bet” is not an option: inaction is a choice. You have to weigh all available actions (including “stall” or “gather more information”) and bet on which one will serve you best.

This mismatch, I think, is responsible for quite a bit of most people’s discomfort with Bayesian decisions. That said, Bayesians are also willing to make really big gambles, gambles which look crazy to most people (who are risk- and loss-averse). Bayesians claim that risk- and loss-aversion are biases that should be overcome, and that we should [shut up and multiply](http://wiki.lesswrong.com/wiki/Shut_up_and_multiply), but this only exacerbates the discomfort.

As such, there’s a lot of appeal to a decision rule that looks out for you in the “worst case” and lets you turn down bets instead of making crazy gambles like those Bayesians. The concepts of “Knightian uncertainty” and “the MMEU rule” appeal to this intuition.

But the MMEU rule doesn’t work as advertised. And finally, I’m in a position to articulate my objection, in three parts.


The MMEU rule fails to grant me caution. Maximizing minimum expected utility does not help me do well in the worst case. It only helps me pick out the types of uncertainty that I expect to be adversarial, and maximize my odds given that that uncertainty will be resolved unfavorably.

Which is a little bit like assuming the worst. I can look at the special uncertainty and say “imagine this part is resolved adversarially; what happens?” But I can’t do this with all my uncertainty, because there’s always some chance that reality has been lying to me and everything is about to get weird. MMEU manages this by limiting its caution to an arbitrary subset of its possibility space. This is a poor approximation of caution.

The MMEU rule is not allowing me to reason as if the world might turn against me. Rather, it’s forcing me to act as if, with certainty, an arbitrary segment of my uncertainty will be resolved unfavorably. I’m all for hedging my bets, and I’m very much in favor of playing defensively when there is an Adversary on the field. I can be just as paranoid as Paranoid Perry, given appropriate reason. I’m happy to identify the parts of nature that often resolve unfavorably and hedge the relevant bets. But when nature proves unbiased, I play the odds. Minimum expected utility maximizers are forced to play defensively forever, no matter how hard nature tries to do them favors.

More importantly, though, new decision rules aren’t how you capture caution. Remember the Game of Drawers? The MMEU rule just doesn’t correspond to our intuitive sense of caution. The way to avoid ruinous bets is not to assume that nature is out to get you. It’s to adjust the utilities appropriately.

Imagine the following variant of Sir Percy’s coin toss:

  1. Pay $1,000,000 to be paid $2,000,001 if the coin came up heads

  2. Pay $1,000,000 to be paid $2,000,001 if the coin came up tails

I would refuse both bets individually, and accept their conjunction. But not because I can’t assign a consistent credence to the event “the coin came up heads”; that’s ridiculous. Nor because I fail to attempt to maximize utility; that’s ridiculous too. I reject each bet individually because dollars aren’t utility. If you convert the dollars into my utils, you’ll see that the downside of either bet taken individually outweighs its upside, but that the two bets taken together have no downside at all: the conjunction guarantees a net gain of $1.
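Here is the same point in miniature, reusing the illustrative loss-averse conversion from the earlier sketch (again an assumption of mine, not Sir Percy’s actual utility function):

```python
# Each bet alone risks losing $1,000,000, so a loss-averse dollars-to-utils
# conversion rejects it; taken together the two bets pay exactly $1 no matter
# how the coin landed, so the conjunction is accepted.

def utils(dollars, loss_aversion=2.5):
    return dollars if dollars >= 0 else loss_aversion * dollars

def expected_utils(lottery):
    return sum(p * utils(x) for p, x in lottery)

heads_bet = [(0.5, 1_000_001), (0.5, -1_000_000)]  # net result if heads / if tails
tails_bet = [(0.5, 1_000_001), (0.5, -1_000_000)]  # symmetric: net if tails / if heads
both_bets = [(1.0, 1)]                             # exactly one side pays out: net +$1

print(expected_utils(heads_bet))  # large and negative: reject on its own
print(expected_utils(tails_bet))  # large and negative: reject on its own
print(expected_utils(both_bets))  # +1: happily accept the conjunction
```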

So yes, I want to be cautious sometimes. Yes, I can reject bets individually that I accept together. I am completely comfortable rejecting many seemingly-good bets. But the MMEU rule is not the thing which grants me these powers.

The MMEU rule fails to grant me humility. One of the original motivations for the MMEU rule is that, as humans, we don’t always know what our credences should be (if we were using all our information correctly, and were able to consider more hypotheses, and so on). In the unbalanced tennis game, we know that our credence for “Anabel wins” should be either really high or really low, but we don’t know which.

I can, of course, recognize this fact as a bounded Bayesian reasoner, without any need for a new decision rule. It is useful for me to recognize that my credences are fuzzy and context-dependent, and that they would be very different if I were a better Bayesian, but I don’t need a new decision rule to model these things. In fact, the MMEU rule makes it harder for me to reason about what my credence should be.

Imagine you know the unbalanced tennis game has already occurred, and that your friend (who you trust completely) has handed you a complicated logical sentence that is true if and only if Anabel won. You haven’t figured out whether the sentence is true yet (you could see it going either way), but now you seem justified in saying your credence should be either 0 or 1 (though you don’t know which yet).

But if your credence for “Anabel won” is either 0% or 100%, and you have Knightian uncertainty about which, then you’re going to have a bad time. If the eccentric bookie from earlier tries to offer you a bet on a player of your choice, then there are no odds the bookie can offer that would make you take the bet.

Allow me to repeat: if you think the tennis game has already occurred, and you have Knightian uncertainty as to whether your credence for “Anabel won” should be 0% or 100%, then if you actually use the MMEU rule, you will refuse a bet at 1,000,000,000-to-1 odds in favor of the player of your choice.
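To spell out why, here is a tiny sketch (the billion-to-one odds and the helper names are mine): with a credence set of {0%, 100%} for “Anabel won”, one of the two candidate distributions gives your chosen player no chance at all, so the minimum expected payoff of betting on either player is negative, and the MMEU rule declines every offer.

```python
# Knightian uncertainty: your credence in "Anabel won" is either 0% or 100%,
# and you don't know which. Represent that as a two-element credence set.
credence_set = [0.0, 1.0]

def min_expected_payoff(prob_bet_wins, stake=1, payout=1_000_000_000):
    """Worst-case expected payoff of staking $1 at billion-to-one odds.

    prob_bet_wins maps P(Anabel won) to the probability that *your* bet wins.
    """
    return min(prob_bet_wins(p) * payout - (1 - prob_bet_wins(p)) * stake
               for p in credence_set)

print(min_expected_payoff(lambda p: p))      # bet on Anabel: worst case is -1
print(min_expected_payoff(lambda p: 1 - p))  # bet on her opponent: worst case is -1
# Not betting pays 0, which beats -1, so the MMEU rule turns down both offers.
```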

Yes, I have meta-level probability distributions over my future credences for object-level events. I am not a perfect Bayesian (nor even a very good one). I regularly misuse the information I have. It is useful for me to be able to reason about what my credence should be, if only to combat various biases such as overconfidence and base-rate neglect.

But the MMEU rule doesn’t help me with any of these things. In fact, the MMEU rule only makes it harder for me to reason about what my credence should be. It’s a broken tool for a problem that I already know how to address.

The MMEU rule sees its uncertainty in the world. Above all, using the MMEU rule requires that you see some of your uncertainty as part of the world, as part of the territory rather than the map. How is the world-uncertainty separated from the mind-uncertainty? Why should I treat them as different types of thing? The MMEU rule divides uncertainty into two arbitrary classes, and the distinction fails to grant me any useful tools.

I already know how to treat my credences as imprecise: I widen my error bars and expect my beliefs to change (even though I can’t predict how). But I still treat the resulting imprecise credences as normal uncertainty. In order to pretend that Knightian uncertainty is fundamentally different from normal uncertainty, we have to assume that it lives in the territory rather than the map. It has to either be controlled by an external process (as Perry believes) or have external significance (as Caul believes).

This seems crazy. Insofar as my credences are biased, I will strive to adjust accordingly. But no matter what I do, they will remain imprecise, and I have to deal with this as best I can. Claiming that the imprecision denotes the Adversarial hand of Nature, or that the imprecision denotes actual Worldparts over which I have preferences, doesn’t help me address the real problem.


The MMEU rule fails to solve the problems it set out to solve. And I don’t need it to solve those problems — I already know how to do that with the tools I have.

Most of the advice from the Knightian uncertainty camp is good. It is good to realize that your credences are imprecise. You should often expect to be surprised. In many domains, you should widen your error bars. But I already know how to do these things.

Sometimes, it is good to reject bets. Often, it is good to delay decisions and seek more information, and to make sure that you do well in the worst case. But I already know how to do these things. I already know how to translate dollars into utilities such that ruinous bets become unappealing.

If the label “Knightian uncertainty” is useful to you, then use it. I won’t protest if you want to stick that label on your own imprecision, on your own inability to consider all of the hypotheses that your evidence supports, or on your own expectation that the future will surprise you no matter how long you deliberate. I personally don’t think that “Knightian uncertainty” is a useful label for these things, because it is one label that tries to do too much. But if it’s useful to you, then use it.

But don’t try to tell me that you should treat it differently! To treat it differently is to act like your uncertainty is in the world, not in you.

If nature starts acting adversarially, then identify the parts of reality that nature gets to control and assume they’ll act against you. I’ll be behind you all the way. If there’s an Adversary around, I’ll be paranoid as hell. But throughout it all, I’ll be maximizing expected utility.

Anything else is either silly, or a misunderstanding of the label “utility”.

When MMEU is useful anyway

The MMEU rule is not fit to be a general decision rule for idealized agents, for all the reasons listed above. Expected utility maximization may seem reckless, and the MMEU rule attempts to offer a fix. However, the proposed answer is to divide uncertainty into two categories, and then be both excessively defensive and excessively reckless at the same time. Unfortunately, two flaws don’t make a feature.

It may appear that a correct decision rule lies somewhere in the middle, somewhere between the “reckless” and “defensive” extremes. Don’t be fooled: Bayesian expected utility maximizers naturally grow defensive as they learn that the world is adversarial, and caution can be written into the utility numbers. If ever it looks like your preferences are best met by doing anything other than maximizing expected utility, then you’ve misplaced your “utility” label.

But, unfortunately for us, we are humans living in the real world, and we happen to have misplaced all our utility labels.

Nobody is offering you bets with payoffs written in clearly delineated utilities. In fact, almost all of the bets that nature offers you are denominated in things like money, time, attention, friendship, or various goods and services. Most of us experience diminishing marginal returns on most goods, and most of us are risk-averse. As such, naïve Bayesian-style gambling for money or time or attention or any other good is usually a pretty bad plan.

Almost all of the bets offered to us by other humans are worse, as they tend to come with ulterior motives attached. Unless you really know what you are doing, naïve Bayesian-style gambling at a casino will get you into a whole lot of trouble.

Furthermore, we are humans. We use a bunch of faulty heuristics, and we are rife with biases. We’re overconfident. We succumb to the planning fallacy. People often don’t distinguish between their expected case and their best case. When people are facing a bet and you ask them to consider the worst case, they consider things like losing the bet, and they don’t consider things like reality being turned into an eternal hellscape because the laws of physics were just kidding. So while it doesn’t make sense for an idealized reasoner to try to maximize utility in the worst case, it may make sense for humans to act that way.

If you find that the MMEU rule is a good heuristic for you, then use it. But when you do, remember why you need it: because humans are overconfident, and because most goods have diminishing returns. If we could fully debias you and correctly compute the utility of each action available to you (including actions like “don’t take the bet” or “stall”, and including preferences for security and stability), then expected utility maximization would be the only sane decision rule to use.

Finally, there are times when we might want to treat uncertainty like it’s in the world rather than in our heads. Suppose, for example, that you believe the Many Worlds interpretation of quantum mechanics. It is possible to have preferences over Everett branches that don’t treat quantum uncertainty like internal uncertainty, and this isn’t necessarily crazy. For example, you could have preferences stating that any non-zero Everett branch in which humanity survives is extremely valuable. In this case, you might be willing to work very hard to expand the branch where humanity survives from zero to something, but be unwilling to work proportionally hard to expand it from large to slightly larger. If you’re VNM-rational, this indicates that you treat quantum uncertainty differently from mental uncertainty.
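As a toy illustration of that last point (the concave value-of-measure function below is a made-up example of mine, not a claim about anyone’s actual preferences): if the value you place on humanity’s survival grows with the square root of the surviving branches’ total measure, you will pay far more to move that measure from 0 to 0.01 than from 0.50 to 0.51, which no preference that is linear in quantum measure can reproduce.

```python
# Toy, assumed value function over the total quantum measure of the branches
# in which humanity survives: concave in measure, so the first sliver of
# survival is worth far more than the same increment added later.

def value_of_survival_measure(measure: float) -> float:
    return measure ** 0.5

print(value_of_survival_measure(0.01) - value_of_survival_measure(0.00))  # 0.1
print(value_of_survival_measure(0.51) - value_of_survival_measure(0.50))  # ~0.007
# A preference that is linear in quantum measure (ordinary expected utility
# over quantum uncertainty, with fixed within-branch utilities) would price
# these two increments identically; valuing them differently is what treating
# quantum uncertainty unlike mental uncertainty cashes out to.
```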

This doesn’t mean you should use the MMEU rule over quantum uncertainty, by any means: Cautious Caul is crazy. But it is useful to remember that whenever something uncertainty-ish is in the world, you might end up doing things that don’t look like expected utility maximization, and this can be rational.

A closing anecdote

My response to someone actually using the MMEU rule depends upon their answer to a simple question:

Why ain’t you rich?

If they sigh and say “because nature resolves all my ambiguity, and nature hates me”, then I will show them all the money that I won when playing exactly the same games as them, and I will say

But nature doesn’t hate you! In all the games where we had reason to believe that nature was adversarial (like when that bookie scanned our brains and came back two days later offering bets that looked really nice at first glance), I played just as defensively as you did, and I did just as well as you did. I’m behind you all the way when it looks like nature has stacked things against us. But in other games, nature hasn’t been adversarial! Remember all those times we played Sir Percy’s coin toss? Look how much richer I became!

But this agent will only shake their head and say “I’m sorry, but you don’t understand. I know that nature is adversarial, and I am absolutely certain that every shred of ambiguity allowed by my credence distribution will be used against me. I acknowledge that nature is not acting adversarially against you, and I envy you for your luck, but nature is adversarial against me, and I’m eking out as much utility as I can.”

And to that, I will only shake my head and depart, mourning for their broken priors.

If, instead, the agent answers “I may not be rich in this worldpart, but there are other worldparts that showed up in my credence distribution where I am richer than you”, then I will shrug.

I care for my Everett-brothers as much as you care for your credence-brothers, and that caring was factored into my utility calculations. And yet still, I am richer.

“Indeed”, the agent will respond with a sage nod. “But while you care for your Everett-brothers according to their measure, I care only about the least convenient world consistent with my credence distribution: so yes, I am poorer here, but it is fine, because I am richer there.”

Well, maybe. You’ve maximized the minimum odds, but that doesn’t mean that your least convenient sliver did well. Back when we played the Game of Drawers, the sliver of you that faced Box 1 probably lost one thousand dollars, while the sliver of me that faced Box 1 definitely gained two bucks.

“Perhaps. But in expectation over my credence distribution, that sliver of me has more money.”

But in expectation overall, considering that Box 2 also exists, I did better than you.

“I understand how you find it strange, but these are my preferences. I care only about the world with the worst odds that happens to fit in my credence distribution.”

“Consider the bet with the thousand-sided quantum die”, the agent will continue. “In the least convenient world of that game, you lost 898 utility, and there is a version of me asking how you could let yourself fail so.”

That Everett-brother of mine knew the risks. His suffering and my sorrow were factored into the utility calculations. Even after adjusting for loss aversion and risk aversion and my preferences for egalitarianism, he traded his utils to us one-for-one or better. He would make the trade again in a heartbeat, as would I for others.

“In that least convenient world”, the agent will reply, “my sliver is asking yours, ‘and what of your Everett-brothers, who profited so from your despair, knowing that you would be left suffering in these depths? Do you think they shed tears for you?’”

Don’t worry,

I’ll answer, in the plethora of expected worlds where I am richer.

We do.