Preference For (Many) Future Worlds

Followup to: Quantum Russian Roulette; The Domain of Your Utility Function

The only way to win is cheat
And lay it down before I’m beat
And to another give my seat
For that’s the only painless feat.

Suicide is painless
It brings on many changes
And I can take or leave it if I please.

-- M.A.S.H.

Let us pretend, for the moment, that we are rational Expected Utility Maximisers. We make our decisions with the intention of achieving outcomes that we judge to have high utility: outcomes that satisfy our preferences. Since developments in physics have led us to abandon the notion of a simple, single future world, our decision-making process must now grapple with the notion that some of our decisions will result in more than one future outcome. Not simply the possibility of more than one future outcome, but multiple worlds, each with different events occurring. In extreme examples we can consider the possibility of staking our very lives on the toss of a quantum die, figuring that we are going to live in one world anyway!

How do preferences apply when making decisions with Many Worlds? The description I’m giving here will be obvious to the point of triviality to some, confusing to others and, I expect, considered outright wrong by still others. But it is the post that I want to be able to link to whenever the question “Do you believe in quantum immortality?” comes up. Because it is a wrong question!

Utility Applies to Universes, Not (Just) Mental States

I am taking for granted here that when we are considering utility and the maximisation thereof we are not limiting ourselves to maximising how satisfied we expect to feel with an outcome in the future. Peter_de_Blanc covered this topic in the post The Domain of Your Utility Function. Consider the following illustration of evaluating expected utility:


Note in particular the highlighted arrow from Extrapolated Future A to Utility of A, and not from Extrapolated Mind A to Utility of A. Contrary to surprisingly frequent assumptions, the sane construction of expected utility cares about more than just how we expect our own mind to be. Our utility function applies to our whole extrapolation of the future. We can care about our friends, family and strangers. We can have preferences about the passengers in a rocket that will have flown outside our future light cone. We can have preferences about the state of the universe even in futures where we die.

Quantum Russian Roulette

Given a source of quantum randomness we can consider a game people may play in the hope of exploiting the idea of quantum immortality as a way to Get Rich Quick! Quantum Russian Roulette is an obvious example. 16 people pool all their money. They then use a device that uses 4 quantum bits to pick one of the players to live, and kills the other 15 painlessly. Winner takes all! Each of the players will wake up in a world where they are alive, well and rich. One hopes they were at least wise enough not to play the game against their loved ones!
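The selection mechanism can be sketched in a few lines of Python (a toy sketch: the `random` module stands in for genuine quantum bits, and the $300k stake per player is a made-up figure; with real qubits every one of the 16 outcomes is realised in some Everett branch):

```python
import random

def quantum_russian_roulette(stakes):
    """Pick the surviving player in 16-player Quantum Russian Roulette.

    Four (pseudo-)random bits stand in for the quantum bits. With real
    quantum randomness, each of the 16 equal-amplitude outcomes occurs
    in some branch of the wavefunction.
    """
    assert len(stakes) == 16
    bits = [random.randint(0, 1) for _ in range(4)]
    winner = bits[0] * 8 + bits[1] * 4 + bits[2] * 2 + bits[3]  # index 0..15
    pot = sum(stakes)  # winner takes all; the other 15 are killed
    return winner, pot

stakes = [300_000] * 16  # hypothetical stake per player
winner, pot = quantum_russian_roulette(stakes)
```

Note that the expected monetary value is exactly the stake (a 1-in-16 shot at 16 times your money), so the game is only “profitable” if you discount the branches where you die.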

Now, what does the decision of whether to play QRoulette look like?

Two-way Quantum Roulette.


The idea is that after going to sleep with $300k you will always and only wake up with $600k. Thus it is said by some that if you accept the Many Worlds implications of Quantum Mechanics you should consider it rational to volunteer for Quantum Roulettes whenever they become available. Kind of like future-oriented anthropic planning, or something.

Friends and Family: The Externalities Cop-Out

When someone dies the damage done is to more than just the deceased. There are all sorts of externalities, most of them negative. The economy is damaged; many people grieve. As such, a common rejoinder to a quantum suicide advocate is “The reason that I wouldn’t commit quantum suicide is that all the people who care about me would be distraught”. That is actually a good reason. In fact it should be more than sufficient, all by itself, to prevent just about everyone from getting quantum suicidal. But it misses the point.

Consider the Least Convenient Possible World: the Quantum Doomsday Lottery. It’s a solo game:

  • Build a device that can send our sun supernova. Or even just obliterate earth comprehensively.

  • Build another device that can generate quantum bits and use them to buy lottery tickets.

  • A third device anaesthetises you and activates the quantum lottery playing machine.

  • If you don’t win the lottery, the earth is obliterated.

  • In the future Everett branches in which you wake up and the species still exists, and in fact in all such branches, you have won the lottery.

Those who have “other people will be sad” as their true rejection of the option of playing quantum roulette can be expected to jump at the opportunity to get rich quick without making loved ones grieve. Most others will instead look a little closer and find a better reason not to kill themselves and destroy the world.

Failure Amplification

In the scenarios above only two options are considered: “win” and “lose/oblivion/armageddon”. Of course the real world is not so simple. All sorts of other events could occur that aren’t accounted for in the model. For example, the hardware could break, causing your execution to be botched. Instead of waking up only when you win, you also wake up when you lose but the machine only manages to make you “mostly dead” before it breaks. You survive in huge amounts of pain, crippled and with 40 fewer IQ points.

Now, you may have extreme confidence in your engineer. You expect the machine to work as specified 99.9999% of the time. In the other 0.0001% of cases, minor fluctuations in the power source or a hit by a cosmic ray somewhere triggers a vulnerability and a failed execution occurs. Humans accept that level of risk on a daily basis with such activities as driving a car or breathing (in New York). (In this case we would call it a “Micronotmort”.) Yet you are quantum suicidal and have decided that all Everett branches in which you die don’t count.

So, if you engage in a 1:2,000,000 Quantum Lottery you can consider (approximately) the following outcomes: 1 live : 1,999,998 die : 2 crippled. Having decided that the 1,999,998 branches in which you die don’t count, you are left with a two in three chance of being a cripple. Mind you, given the amount of money that would be staked in such a lottery it would probably be a pretty good deal financially!
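That conditional arithmetic can be checked directly (a sketch using this post’s approximate branch counts; exact rationals avoid any floating-point quibbles):

```python
from fractions import Fraction

# Approximate branch counts for a 1-in-2,000,000 quantum lottery played
# on a machine that mis-fires in one in a million executions.
total = 2_000_000
win = 1
lose = total - win                       # 1,999,999 branches fire the machine
failure_rate = Fraction(1, 1_000_000)    # botched executions per firing
crippled = lose * failure_rate           # roughly 2 branches: "mostly dead"
dead = lose - crippled                   # roughly 1,999,997 branches

# Deciding the dead branches "don't count" means conditioning on survival:
p_crippled = crippled / (crippled + win)
print(float(p_crippled))  # roughly 0.667: two of three surviving branches
```

The machine’s tiny failure rate gets amplified from one-in-a-million to a majority of your “counted” futures, which is the whole point of this section.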

What does this mean?

Don’t try to use Quantum Suicide to brute force cryptanalysis problems. It just isn’t going to work, even if you think the theoretical expected outcome is worthwhile! You aren’t that good at building stuff.

While this is also an interesting topic in itself (I believe there are posts about it floating around here someplace) it is somewhat beside the main point. Is quantum suicide a good idea even when you iron out all the technical difficulties?

Quantum Sour Grapes

People have been gambling for millennia. Most of the people who have lost bets have done so without killing themselves. Much can be learned from this. For example, that killing yourself is worse than not killing yourself. This intuition should carry over to ‘quantum’ gambles rather straightforwardly. Consider this alternative:


Don't fucking kill yourself

Here we see just as many futures in which Blue ends up with the jackpot. And this time Blue manages not to kill himself in the branches where he loses. Blue is sad for a while until he makes some more money and gets over it. He isn’t dead. This is obviously just plain better. The only reasons for Blue to kill himself when he loses would be contrived examples, such as those involving torture that can only be avoided by a payment that cannot be arranged any other way.
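The “just plain better” claim is a dominance argument, which can be made explicit with some invented utilities (the numbers are made up; only the ordering U(dead) < U(sad but alive) < U(rich) is assumed):

```python
# Hypothetical utilities for Blue's possible outcomes.
U = {"rich": 10.0, "sad_but_alive": 4.0, "dead": 0.0}

# Two equal-measure Everett branches (win, lose) under each strategy.
quantum_roulette = ("rich", "dead")              # kill yourself on losing
quantum_sour_grapes = ("rich", "sad_but_alive")  # don't

def expected_utility(branches):
    # Branches carry equal quantum measure, so a plain average suffices.
    return sum(U[outcome] for outcome in branches) / len(branches)

# The winning branch is identical under both strategies; the losing
# branch is strictly better without suicide. So sour grapes dominates,
# whatever the exact utility numbers happen to be.
assert expected_utility(quantum_sour_grapes) > expected_utility(quantum_roulette)
```

Because the comparison holds branch by branch, no appeal to anthropics or to quantum measure is needed to reach the conclusion.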

You get just as much quantum ‘winningness’ if you don’t kill yourself. For this reason I consider games like Quantum Roulette to be poorly named, particularly when “Quantum Immortality” is also mentioned. I much prefer the label “Quantum Sour Grapes”.

Lesson: Don’t make decisions based on anticipated future anthropic reasoning. That’s just asking for trouble.

Not Wrong (But Perhaps Crazy)

I personally consider anyone who wants to play quantum roulette to be crazy. And anyone who wanted to up the stakes to a doomsday quantum variant would be a threat to be thwarted at all costs, if they were not too crazy to even take seriously. But I argue that this is a matter of preference and not (just) one of theoretical understanding. We care about possible future states of the universe, all of the universe. If we happen to prefer futures of our current Everett branch in which most sub-branches have us dead but one does not, then we can prefer that.