Logical and Indexical Uncertainty

Imagine I shot a photon at a half-silvered mirror which reflects the photon with “probability” 1/2 and lets the photon pass through with “probability” 1/2.

Now imagine I calculated the trillionth decimal digit of pi, and checked whether it was even or odd. As a Bayesian, you use the term “probability” in this situation too, and to you, the “probability” that the digit is odd is 1/2.

What is the difference between these two situations? Assuming the many worlds interpretation of quantum mechanics, the first probability comes from indexical uncertainty, while the second comes from logical uncertainty. In indexical uncertainty, both possibilities are true in different parts of whatever your multiverse model is, but you are unsure which part of that multiverse you are in. In logical uncertainty, only one of the possibilities is true, but you do not have information about which one. It may seem at first like this should not change our decision theory, but I believe there are good reasons why we should care about what type of uncertainty we are talking about.

I present here 6 reasons why we potentially care about the 2 different types of uncertainties. I do not agree with all of these ideas, but I present them anyway, because it seems reasonable that some people might argue for them. Is there anything I have missed?

1) Anthropics

Suppose Sleeping Beauty volunteers to undergo the following experiment, which is described to her before it begins. On Sunday she is given a drug that sends her to sleep, and a coin is tossed. If the coin lands heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug that makes her forget the events of Monday only, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. Beauty wakes up in the experiment and is asked, “With what subjective probability do you believe that the coin landed heads?”

People argue about whether the “correct answer” to this question should be 1/3 or 1/2. Some say that the question is malformed, and needs to be rewritten as a decision theory question. Another view is that the answer actually depends on the coin flip:

If the coin flip is an indexical coin flip, then there are effectively 3 copies of Sleeping Beauty, and in only 1 of those copies did the coin come up heads, so you should say 1/3. On the other hand, if it is a logical coin flip, then you cannot compare the two copies of you waking up in one possible world with the one copy of you waking up in the other possible world. Only one of the worlds is logically consistent. The trillionth digit of pi is not changed by you waking up, and you will wake up regardless of the state of the trillionth digit of pi.
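The thirder counting for an indexical coin can be checked with a quick Monte Carlo sketch (my illustration, not part of the original post): simulate many runs of the experiment and ask, over all awakenings, how often the coin was heads.

```python
import random

def sleeping_beauty_trials(n, seed=0):
    """Simulate n runs of the experiment with an indexical (random) coin
    and return the fraction of awakenings at which the coin was heads."""
    rng = random.Random(seed)
    heads_awakenings = 0
    total_awakenings = 0
    for _ in range(n):
        heads = rng.random() < 0.5
        # Heads: one awakening (Monday); tails: two (Monday and Tuesday).
        awakenings = 1 if heads else 2
        total_awakenings += awakenings
        if heads:
            heads_awakenings += 1
    return heads_awakenings / total_awakenings

print(sleeping_beauty_trials(100_000))  # close to 1/3
```

Note that this only settles the question of counting awakenings; the halfer simply denies that awakenings are the right thing to count.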

2) Risk Aversion

Imagine that I were to build a doomsday device. The device flips a coin, and if the coin comes up heads, it destroys the Earth and everything on it. If the coin comes up tails, it does nothing. Would you prefer that the coin flip were a logical coin flip or an indexical coin flip?

You probably prefer the indexical coin flip. It feels safer to have the world continue on in half of the universes than to risk destroying the world in all universes. I do not think this feeling arises from biased thinking, but instead from a true difference in preferences. To me, destroying the world in all of the universes is actually much more than twice as bad as destroying the world in half of the universes.

3) Preferences vs Beliefs

In this view, it seems that indexical uncertainty should be viewed as preferences, while logical uncertainty should be viewed as beliefs. It is important to note that this all adds up to normality. If we are trying to maximize our expected utility, the only thing we do with preferences and beliefs is multiply them together, so for the most part it doesn’t change much to think of something as a preference as opposed to a belief.

4) Altruism

In Subjective Altruism, I asked whether, when being altruistic towards someone else, you should try to maximize their expected utility relative to your probability function or relative to their probability function. If your answer was to choose the option which maximizes your expectation of their utility, then it is actually very important whether indexical uncertainty is a belief or a preference.

5) Sufficient Reflection

In theory, given enough time, you can settle logical uncertainties just by thinking about them. However, given enough time, you can settle indexical uncertainties by making observations. It seems to me that there is not a meaningful difference between observations that take place entirely within your mind and observations about the outside world. I therefore do not think this difference means very much.

6) Consistency

Logical uncertainty seems like it is harder to model, since it means you are assigning probabilities to possibly inconsistent theories, and all inconsistent theories are logically equivalent. You might want some measure of equivalence of your various theories, and it would have to be different from logical equivalence. Indexical uncertainty does not appear to have the same issues, at least not in an obvious way. However, I think this issue only comes from looking at the problem in the wrong way. I believe that probabilities should only be assigned to logical statements, not to entire theories. Then, since everything is finite, you can treat sentences as equivalent only after you have proven them equivalent.

7) Counterfactual Mugging

Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it \$100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your \$100. But Omega also tells you that if the coin came up heads instead of tails, it’d give you \$10000, but only if you’d agree to give it \$100 if the coin came up tails.

It seems reasonable to me that people might feel very differently about this question based on whether the coin is logical or indexical. To me, it makes sense to give up the \$100 either way, but it seems possible to change the question in such a way that the type of coin flip might matter.
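The standard argument for paying runs through expected value: evaluated before the coin flip, the policy of paying beats the policy of refusing. A minimal sketch of that arithmetic (my illustration of the usual argument, not something from the post):

```python
# Expected value of the two policies, evaluated before the coin flip.
p_heads = 0.5

def ev_pay():
    # Pay $100 on tails; receive $10000 on heads, since Omega only
    # rewards agents who would have paid.
    return p_heads * 10_000 + (1 - p_heads) * (-100)

def ev_refuse():
    # Never pay, so Omega never rewards you on heads.
    return 0.0

print(ev_pay())     # 4950.0
print(ev_refuse())  # 0.0
```

The controversy is whether this pre-flip calculation still binds you after you learn the coin came up tails, and (per the post) whether the answer changes when the "flip" is a logical fact that was never contingent at all.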

• The second problem can easily be explained by having your utility function not be linear in the number of non-destroyed universes.
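To illustrate the commenter's point (my framing, with an assumed toy utility function): model the outcome by the fraction of universes that survive. A linear utility is indifferent between the two coins; any concave utility prefers the indexical flip.

```python
import math

def expected_utility(u):
    """Compare the two doomsday coins under a utility u(f), where f is
    the fraction of universes that survive."""
    indexical = u(0.5)                      # half the universes survive for sure
    logical = 0.5 * u(1.0) + 0.5 * u(0.0)   # all survive, or none do
    return indexical, logical

lin = expected_utility(lambda f: f)   # linear: both 0.5, indifferent
con = expected_utility(math.sqrt)     # concave: indexical flip preferred
print(lin, con)
```

The post's intuition that losing everything is "more than twice as bad" as losing half is exactly the statement that u is concave here.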

• Now imagine I calculated the trillionth decimal digit of pi, and checked whether it was even or odd. As a Bayesian, you use the term “probability” in this situation too, and to you, the “probability” that the digit is odd is 1/2.

To me the probability that the trillionth decimal digit of pi is odd is about 0.05. The trillionth digit of pi is 2 (but there is about a one in twenty chance that I’m confused). For some reason people keep using that number as an example of a logical uncertainty, so I looked it up.

When a logical coin is:

b) Comparatively trivial to re-calculate. (Humans have calculated the two-quadrillionth digit of pi. The trillionth digit is trivial.)
c) Used sufficiently frequently that people not only know where to look up the answer but remember it from experience.

...Then it is probably time for us to choose a better coin. (Unfortunately I haven’t yet found a function that exhibits all the desiderata I have for an optimal logically uncertain coin.)

• (Unfortunately I haven’t yet found a function that exhibits all the desiderata I have for an optimal logically uncertain coin.)

Is floor(exp(3^^^3)) even or odd?

• For scenario 7, I think I may have generated a type of situation where the type of coin flip might matter, but I feel like I may also have made an error somewhere. I’ll post what I have so far for verification.

To explain, imagine that Omega knows in advance that the logical coin flip is going to be tails every time he flips the logical coin, because odd is tails and he is asking about the first digit of pi, which is odd.

Now, in this case, you would also know the first digit of pi is odd, so there wouldn’t be an information asymmetry. You just wouldn’t play if you knew Omega had made a logical decision to use a logical coin that came up tails, because you would never, even hypothetically, have gotten money. It would be as if Omega said: “1=1, so I decided to ask you to give me \$100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your \$100. But I’m also telling you that if 0=1, I’d give you \$10000, but only if you’d agree to give me \$100 if 1=1.” It seems reasonable to not give Omega money in that case.

However, since Omega has more computing power, there are always going to be logical coins that look random to you that Omega can use: maybe the trillionth digit of pi is unknown to you, but Omega calculated it beforehand, before making you any offers, and it happens to have been odd/tails.

Omega can even do something that has indexical random components and logical components but ends up being logically calculable. Suppose Omega rolls an indexical six-sided die, adds 761 (a ‘random’ seed) to the result, and then checks the even/odd status of the corresponding digit of pi, anywhere from the 762nd digit through the 767th digit depending on the die roll. If the digit is odd, the coin is tails. That range is the Feynman point (http://en.wikipedia.org/wiki/Feynman_point): all six digits are 9, which is odd, so the coin is always tails.
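The Feynman-point claim can be checked without looking anything up. This is my sketch (not from the thread) using Gibbons’ unbounded spigot algorithm, a standard way to generate decimal digits of pi with only integer arithmetic:

```python
def pi_digits(n):
    """Return the first n decimal digits of pi (3, 1, 4, 1, 5, ...)
    via Gibbons' unbounded spigot algorithm."""
    q, r, t, k, m, x = 1, 0, 1, 1, 3, 3
    out = []
    while len(out) < n:
        if 4 * q + r - t < m * t:
            # The next digit m is now certain; emit it and rescale.
            out.append(m)
            q, r, m = 10 * q, 10 * (r - m * t), (10 * (3 * q + r)) // t - 10 * m
        else:
            # Consume another term of the series to narrow the interval.
            q, r, t, k, m, x = (q * k, (2 * q + r) * x, t * x, k + 1,
                                (q * (7 * k + 2) + r * x) // (t * x), x + 2)
    return out

digits = pi_digits(768)
# digits[0] is the leading 3, so digits[762:768] are decimal places 762-767.
print(digits[762:768])  # [9, 9, 9, 9, 9, 9] -- the Feynman point
```

So any of the six die rolls lands on a 9, and Omega’s “partly indexical” coin is indeed always tails.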

If this is a simple indexical coin flip, Omega can’t have that sort of advance knowledge.

However, what I find confusing is that asking about the recorded past result of an indexical coin flip appears to be a type of logical uncertainty. So since this has occurred in the past, what seems to be the exact same reasoning would tell me not to give Omega the money right now, because I can’t trust that Omega won’t exploit the information asymmetry.

This is where I was concerned I had missed something, but I’m not sure what, if anything, I am missing.

• I think all you are observing here is that your probability that other agents know the result of the coin flip changes between the two situations. However, others can know the result for either type of flip, so this is not really a qualitative difference. It is a way in which other information about the coin flip matters, besides just whether or not it is logical.

You achieve this by making the coin flip correlated with other facts, which is what you did. (I think this is made more confusing and veiled by the fact that these facts are within the mind of Omega.)

Omega does not have to have advance knowledge of an indexical coin flip. He just needs knowledge, which he can have.

• Your point 2 seems to be about anthropics, not risk aversion. If you replace “destroying the world” with “kicking a cute puppy”, I become indifferent between indexical and logical coins. If it’s “destroying the world painfully for all involved”, I also get closer to being indifferent. Likewise if it’s “destroying the world instantly and painlessly”, but there’s a 1% indexical chance that the world will go on anyway. The difference only seems to matter when you imagine all your copies disappearing.

And even in that case, I’m not completely sure that I prefer the indexical coin. The “correct” multiverse theory might be one that includes logically inconsistent universes anyway (“Tegmark level 5”), so indexical and logical uncertainty become more similar to each other. That’s kinda the approach I took when trying to solve Counterfactual Mugging with a logical coin.

• I just wanted to note that this is not a legal argument: if you need 6 reasons and not 1, then none of the 6 are any good.

• I was not trying to argue anything. I was trying to present a list of differences. I personally think 2 and 3 each on their own are enough to justify the claim that the distinction is important.

• It occurs to me that your argument 1 is set up strangely: it assumes perfect correlation between flips, which is not how a coin is assumed to behave. If instead you pre-pick different large uncorrelated digits for each flip, then the difference between the uncertainties disappears. It seems that similar correlations are to blame for caring about the type of uncertainty in the other cases as well.

• I have no idea what you are saying here.

• This should be in Main.

• Disagree. It’s asking interesting questions, but not giving many answers; it’s perfect for Discussion.

• Yeah, that’s what I was thinking.

• I’m not so sure, but if others agree, I’ll upgrade it. Do I just do that by editing it and posting to Main instead?

• I voted up, but do not think it should go in main.

• My measure of “others agree” was Vulture’s comment’s karma, not the karma of the post. I think that that measure settles the fact that I put the post in the correct place.

• I recommend waiting for a couple of days, and if you get 20 karma or so, then move to Main.