Logical and Indexical Uncertainty

Cross-posted on By Way of Contradiction

Imagine I shoot a photon at a half-silvered mirror, which reflects the photon with “probability” 1/2 and lets it pass through with “probability” 1/2.

Now, imagine I calculate the trillionth decimal digit of pi and check whether it is even or odd. As a Bayesian, you use the term “probability” in this situation too, and to you, the “probability” that the digit is odd is 1/2.
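
As an aside on what it would mean to settle this kind of uncertainty, here is a minimal sketch, assuming the third-party mpmath library, that computes a decimal digit of pi and checks its parity. The trillionth digit is far beyond this naive approach, so the sketch uses the 1000th digit; the point is only that the answer is fixed and can in principle be reached by computation alone.

```python
# Minimal sketch: settle a small logical uncertainty about pi by computation.
# Assumes the third-party mpmath library; the digit index is illustrative.
from mpmath import mp

N = 1000                 # which decimal digit of pi to inspect
mp.dps = N + 10          # working precision, with a small safety margin
pi_str = str(mp.pi)      # "3.14159..." with mp.dps digits
digit = int(pi_str[N + 1])   # "3." occupies the first two characters

print(f"decimal digit #{N} of pi is {digit} ({'odd' if digit % 2 else 'even'})")
```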

What is the difference between these two situations? Assuming the many-worlds interpretation of quantum mechanics, the first probability comes from indexical uncertainty, while the second comes from logical uncertainty. In indexical uncertainty, both possibilities are true in different parts of whatever your multiverse model is, but you are unsure which part of that multiverse you are in. In logical uncertainty, only one of the possibilities is true, but you do not have information about which one. It may seem at first like this should not change our decision theory, but I believe there are good reasons why we should care about what type of uncertainty we are talking about.

I present here 7 reasons why we might potentially care about the two different types of uncertainty. I do not agree with all of these ideas, but I present them anyway, because it seems reasonable that some people might argue for them. Is there anything I have missed?

1) Anthropics

Suppose Sleeping Beauty volunteers to undergo the following experiment, which is described to her before it begins. On Sunday she is given a drug that sends her to sleep, and a coin is tossed. If the coin lands heads, Beauty is awakened and interviewed on Monday, and then the experiment ends. If the coin comes up tails, she is awakened and interviewed on Monday, given a second dose of the sleeping drug that makes her forget the events of Monday only, and awakened and interviewed again on Tuesday. The experiment then ends on Tuesday, without flipping the coin again. Beauty wakes up in the experiment and is asked, “With what subjective probability do you believe that the coin landed heads?”

People argue about whether the “correct answer” to this question should be 1/3 or 1/2. Some say that the question is malformed, and needs to be rewritten as a decision theory question. Another view is that the question actually depends on the coin flip:

If the coin flip is an indexical coin flip, then there are effectively 3 copies of Sleeping Beauty, and in 1 of those copies the coin came up heads, so you should say 1/3. On the other hand, if it is a logical coin flip, then you cannot compare the two copies of you waking up in one possible world with the one copy of you waking up in the other possible world. Only one of the worlds is logically consistent. The trillionth digit of pi is not changed by you waking up, and you will wake up regardless of the state of the trillionth digit of pi.
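
For the indexical version, the per-awakening counting can be checked with a short Monte Carlo sketch. The only thing it verifies is the frequency itself (about 1/3 of awakenings follow a heads flip); whether that frequency is the number Beauty should report is exactly what the halfer/thirder debate is about.

```python
# Monte Carlo sketch of the indexical Sleeping Beauty setup:
# heads -> one awakening (Monday), tails -> two awakenings (Monday and Tuesday).
# We count what fraction of awakenings follow a heads flip.
import random

random.seed(0)
heads_awakenings = 0
total_awakenings = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    n_awakenings = 1 if heads else 2
    total_awakenings += n_awakenings
    if heads:
        heads_awakenings += n_awakenings

print(heads_awakenings / total_awakenings)   # approximately 1/3
```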

2) Risk Aversion

Imagine that I were to build a doomsday device. The device flips a coin, and if the coin comes up heads, it destroys the Earth and everything on it. If the coin comes up tails, it does nothing. Would you prefer that the coin flip be a logical coin flip or an indexical coin flip?

You probably prefer the indexical coin flip. It feels safer to have the world continue on in half of the universes than to risk destroying the world in all universes. I do not think this feeling arises from biased thinking, but instead from a true difference in preferences. To me, destroying the world in all of the universes is actually much more than twice as bad as destroying the world in half of the universes.
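
A toy calculation makes this concrete. Under the purely illustrative assumption that the badness of losing a fraction f of the branches grows like f² (so losing everything is more than twice as bad as losing half), the logical coin comes out worse in expectation, while a linear badness would rate the two coins the same.

```python
# Toy comparison of the two doomsday coins.  The quadratic badness function is
# an illustrative assumption standing in for "losing everything is more than
# twice as bad as losing half"; with a linear badness the two coins would tie.
def badness(fraction_destroyed):
    return fraction_destroyed ** 2

indexical = badness(0.5)                              # half the branches are lost for sure
logical = 0.5 * badness(1.0) + 0.5 * badness(0.0)     # 50% chance everything is lost

print(indexical, logical)   # 0.25 vs 0.5 -> the logical coin looks worse
```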

3) Preferences vs Beliefs

In updateless decision theory, you want to choose the output of your decision procedure. If there are multiple copies of yourself in the universe, you do not ask which copy you are, but instead just choose the output which maximizes your utility of the universe in which all of your copies output that value. The “expected” utility comes from your logical uncertainty about what the universe is like. There is not much room in this theory for indexical uncertainty. Instead, the indexical uncertainty is encoded into your utility function. The fact that you prefer to be given a reward with indexical probability 99% rather than with indexical probability 1% should instead be viewed as you preferring the universe in which 99% of the copies of you receive the reward to the universe in which 1% of the copies of you receive the reward.
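
One way to picture that encoding is to write the utility function directly over whole universes, with the indexical weighting folded into it, and let logical uncertainty supply the probabilities. A minimal sketch, with the two candidate universes and the 50/50 logical weights as illustrative assumptions:

```python
# Sketch of the updateless view: indexical uncertainty lives inside the
# utility function (a universe is valued by the fraction of your copies that
# get the reward), while logical uncertainty supplies the probabilities.
def utility(universe):
    # "Preference": value a whole universe by how many of your copies are rewarded.
    return universe["copies_rewarded"] / universe["total_copies"]

# "Belief": logical uncertainty over which universe is the consistent one.
logical_possibilities = [
    (0.5, {"copies_rewarded": 99, "total_copies": 100}),  # e.g. the digit is odd
    (0.5, {"copies_rewarded": 1,  "total_copies": 100}),  # e.g. the digit is even
]

expected_utility = sum(p * utility(u) for p, u in logical_possibilities)
print(expected_utility)   # 0.5 -- beliefs and preferences just multiply together
```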

In this view, it seems that indexical uncertainty should be viewed as preferences, while logical uncertainty should be viewed as beliefs. It is important to note that this all adds up to normality. If we are trying to maximize our expected utility, the only thing we do with preferences and beliefs is multiply them together, so for the most part it doesn’t change much to think of something as a preference as opposed to a belief.

4) Altruism

In Subjective Altruism, I asked whether, when being altruistic towards someone else, you should try to maximize their expected utility relative to your probability function or relative to their probability function. If your answer was to choose the option which maximizes your expectation of their utility, then it is actually very important whether indexical uncertainty is a belief or a preference.

5) Sufficient Reflection

In theory, given enough time, you can settle logical uncertainties just by thinking about them. However, given enough time, you can settle indexical uncertainties by making observations. It seems to me that there is not a meaningful difference between observations that take place entirely within your mind and observations about the outside world. I therefore do not think this difference means very much.

6) Consistency

Logical uncertainty seems like it is harder to model, since it means you are assigning probabilities to possibly inconsistent theories, and all inconsistent theories are logically equivalent. You might want some measure of equivalence of your various theories, and it would have to be different from logical equivalence. Indexical uncertainty does not appear to have the same issues, at least not in an obvious way. However, I think this issue only comes from looking at the problem in the wrong way. I believe that probabilities should only be assigned to logical statements, not to entire theories. Then, since everything is finite, you can treat sentences as equivalent only after you have proven them equivalent.
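
A minimal sketch of that last idea, treating sentences as distinct objects and only merging them once an equivalence proof is in hand. The sentences, numbers, and the averaging rule used when two classes merge are illustrative placeholders, not a worked-out proposal.

```python
# Assign probabilities to individual sentences; identify two sentences only
# after an equivalence proof is found (using a small union-find).  The
# reconciliation rule on merge (averaging) is a placeholder.
class SentenceBeliefs:
    def __init__(self):
        self.parent = {}   # union-find parent pointers
        self.prob = {}     # probability attached to each class representative

    def _find(self, s):
        self.parent.setdefault(s, s)
        while self.parent[s] != s:
            self.parent[s] = self.parent[self.parent[s]]   # path halving
            s = self.parent[s]
        return s

    def assign(self, sentence, p):
        self.prob[self._find(sentence)] = p

    def probability(self, sentence):
        return self.prob.get(self._find(sentence))

    def proven_equivalent(self, s1, s2):
        # Only now are the two sentences treated as the same statement.
        r1, r2 = self._find(s1), self._find(s2)
        if r1 == r2:
            return
        merged = (self.prob.get(r1, 0.5) + self.prob.get(r2, 0.5)) / 2
        self.parent[r2] = r1
        self.prob[r1] = merged

beliefs = SentenceBeliefs()
beliefs.assign("the trillionth digit of pi is odd", 0.5)
beliefs.assign("a sentence later proven to say the same thing", 0.9)
beliefs.proven_equivalent("the trillionth digit of pi is odd",
                          "a sentence later proven to say the same thing")
print(beliefs.probability("the trillionth digit of pi is odd"))   # 0.7
```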

7) Counterfactual Mugging

Omega appears and says that it has just tossed a fair coin, and given that the coin came up tails, it decided to ask you to give it $100. Whatever you do in this situation, nothing else will happen differently in reality as a result. Naturally you don’t want to give up your $100. But Omega also tells you that if the coin had come up heads instead of tails, it would have given you $10,000, but only if you would have agreed to give it $100 if the coin came up tails.
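
One standard way to see why paying can make sense is a straightforward expected-value comparison between the two policies, evaluated before the coin is flipped; a minimal sketch with the payoffs from the setup:

```python
# Expected value of each policy, evaluated before the coin flip.
p_heads = 0.5

ev_pay    = p_heads * 10_000 + (1 - p_heads) * (-100)   # 4950.0
ev_refuse = p_heads * 0      + (1 - p_heads) * 0        # 0.0

print(ev_pay, ev_refuse)   # the precommitted payer comes out ahead on average
```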

It seems reasonable to me that people might feel very differently about this question depending on whether the coin is logical or indexical. To me, it makes sense to give up the $100 either way, but it seems possible to change the question in such a way that the type of coin flip might matter.