The randomness/ignorance model solves many anthropic problems

(Follow-up to Randomness vs Ignorance and Reference Classes for Randomness)

I’ve argued that all uncertainty can be divided into randomness and ignorance, and that this model is free of contradictions. Its purpose is to resolve anthropic puzzles such as the Sleeping Beauty problem.

If the model is applied to these problems, they appear to be underspecified. Details required to categorize the relevant uncertainty are missing, and this underspecification might explain why there is still no consensus on the correct answers. However, if the missing pieces are added in such a way that all uncertainty can be categorized as randomness, the model does give an answer. Doing this doesn’t just solve a variant of the problem; it also highlights the parts that make these problems distinct from each other.

I’ll go through two examples to demonstrate this. The underlying principles are simple, and the model can be applied to every anthropic problem I know of.

1. Sleeping Beauty

In the original problem, a coin is tossed at the beginning to decide between the one-interview and the two-interview version of the experiment. In our variation, we will instead repeat the experiment 2n times and have n of those runs use the one-interview version, and the other n runs use the two-interview version. Sleeping Beauty knows this but isn’t being told which version she’s currently participating in. This leads to 2n instances of Sleeping Beauty waking up on Monday (one per run), and n instances of her waking up on Tuesday (one per two-interview run). All instances fall into the same reference class, because there is no information available to tell them apart. Thus, Sleeping Beauty’s uncertainty about the current day is random with probability 2n/(2n+n) = 2/3 for Monday.
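
To make the counting concrete, here is a minimal sketch in Python (the run count n is an arbitrary choice, not part of the problem) that enumerates every awakening across the 2n runs and recovers the 2/3 figure:

```python
from fractions import Fraction

def monday_probability(n: int) -> Fraction:
    """Enumerate every awakening across n one-interview runs
    and n two-interview runs of the experiment."""
    awakenings = []
    for _ in range(n):   # one-interview runs: Monday only
        awakenings.append("Monday")
    for _ in range(n):   # two-interview runs: Monday and Tuesday
        awakenings.extend(["Monday", "Tuesday"])
    # All awakenings fall into one reference class, so each counts equally.
    return Fraction(awakenings.count("Monday"), len(awakenings))

print(monday_probability(1000))  # -> 2/3, independent of n
```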

2. Presumptuous Philosopher

In the original problem, the debate is about the question of how the number of people in a universe influences the probability that the universe is large, but it is unspecified whether our current universe is the only universe.

Let’s fill in the blanks. Suppose there is one universe at the base of reality which runs many simulations, one of them being ours. The simulated universes can’t run simulations themselves, so there are only two layers. Exactly half of these simulations are of “small” universes (say with n people each), and the other half are of “large” universes (say with k·n people each, for some k > 1). All universes look identical from the inside.

Once again, there is only one reference class. Since there is an equal number of small and large universes, exactly k·n out of every (k+1)·n members of the class are located in large universes. If we know all this, then (unlike in the original problem) our uncertainty about which universe we live in is clearly random with probability k·n/(n + k·n), i.e. k/(k+1), for the universe being large.
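
As before, a short sketch makes the counting explicit (the population sizes n and k here are placeholder assumptions, not values from the original problem):

```python
from fractions import Fraction

def p_large_universe(n: int, k: int) -> Fraction:
    """For each small universe (n people) there is one large
    universe (k*n people); return the fraction of all people
    who live in a large universe."""
    small_people = n        # people per small universe
    large_people = k * n    # people per large universe
    return Fraction(large_people, small_people + large_people)

print(p_large_universe(n=10**6, k=100))  # -> 100/101, i.e. k/(k+1)
```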

Bostrom came up with the Presumptuous Philosopher problem as an argument against SIA (which is one of the two main anthropic theories, and the one which answers 1/3 on Sleeping Beauty). Notice how it is about the size of the universe, i.e. something that might never be repeated, where the answer might always be the same. This is no coincidence. SIA tends to align with the randomness/ignorance model whenever all uncertainty collapses into randomness, and to diverge whenever it doesn’t. Naturally, the way to construct a thought experiment where SIA appears to be overconfident is to make it so that the relevant uncertainty might plausibly be ignorance. This is an example of how I believe the randomness/ignorance model adds to our understanding of these problems.

So far I haven’t talked about how the model computes probability if the relevant uncertainty is ignorance. In fact it behaves like SSA (rather than SIA), but the argument is lengthy. For now, simply assume it’s agnostic.