Anthropic Atheism

(Crossposted from my blog)

I’ve been developing an approach to anthropic questions that I find less confusing than others, which I call Anthropic Atheism (AA). The name is a snarky reference to the ontologically basic status of observers (souls) in other anthropic theories. I’ll have to explain myself.

We’ll start with what I call the “Sherlock Holmes Axiom” (SHA), which will form the epistemic background for my approach:

How often have I said to you that when you have eliminated the impossible, whatever remains, however improbable, must be the truth?

Which I reinterpret as “Reason by eliminating those possibilities inconsistent with your observations. Period.” I use this as a basis of epistemology. Basically: think of all possible world-histories, assign a probability to each of them according to whatever principles you like (e.g. Occam’s razor), eliminate the ones inconsistent with your observations, and renormalize your probabilities. I won’t go into the details, but it turns out that probability theory (e.g. Bayes’ theorem) falls out of this just fine when you translate P(E|H) as “the portion of possible worlds consistent with H that predict E”. So it’s not really any different, but using SHA as our basis, I find certain confusing questions less confusing, and certain unholy temptations less tempting.
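To make the “Bayes falls out of this” claim concrete, here is a minimal sketch (the hypotheses and prior weights are made up purely for illustration): filtering out the worlds inconsistent with an observation and renormalizing what remains gives the same numbers as applying Bayes’ theorem directly.

```python
# Toy illustration of "eliminate and renormalize" reproducing Bayes' theorem.
# Each world-history gets a prior weight; observing E simply discards the
# worlds inconsistent with E and rescales what remains.

# (hypothesis, predicted observation, prior weight) -- illustrative numbers only
worlds = [
    ("H1", "E",     0.3),
    ("H1", "not-E", 0.3),
    ("H2", "E",     0.1),
    ("H2", "not-E", 0.3),
]

def posterior(observation):
    surviving = [(h, w) for h, pred, w in worlds if pred == observation]
    total = sum(w for _, w in surviving)
    result = {}
    for h, w in surviving:
        result[h] = result.get(h, 0) + w / total
    return result

print(posterior("E"))  # {'H1': 0.75, 'H2': 0.25} -- same as Bayes with
                       # P(H1)=0.6, P(E|H1)=0.5, P(H2)=0.4, P(E|H2)=0.25
```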

With that out of the way, let’s have a look at some confusing questions. First up is the Doomsday Argument. From La Wik:

Simply put, it says that supposing the humans alive today are in a random place in the whole human history timeline, chances are we are about halfway through it.

The article goes on to claim that “There is a 95% chance of extinction within 9120 years.” Hard to refute, but nevertheless it makes one rather uncomfortable that the mere fact of one’s existence should have predictive consequences.
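For the curious, the 9120-year figure can be roughly reconstructed as follows. This is only a sketch, using the assumptions the Wikipedia derivation makes: about 60 billion humans born so far, world population stabilizing at 10 billion, and an 80-year lifespan.

```python
# Rough reconstruction of the 9120-year figure (assumed inputs: ~60 billion
# humans born so far, population stabilizing at 10 billion, 80-year lifespan).

born_so_far = 60e9
# With 95% confidence you are not in the first 5% of all humans ever born,
# so the total number of humans is at most 20x the count so far.
total_upper_bound = 20 * born_so_far          # 1.2 trillion
remaining_births = total_upper_bound - born_so_far

births_per_year = 10e9 / 80                   # stable population / lifespan
years_left = remaining_births / births_per_year
print(years_left)                             # 9120.0
```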

In response, Nick Bostrom formulated the “Self-Indication Assumption” (SIA), which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all possible observers.” Applied to the Doomsday Argument, it says that you are just as likely to exist in 2014 in a world where humanity grows up to create a glorious everlasting civilization as in one where we wipe ourselves out in the next hundred years, so you can’t update on the mere fact of your existence. This is comforting, as it defuses the Doomsday Argument.

By contrast, the Doomsday Argument is the consequence of the “Self-Sampling Assumption” (SSA), which states that “All other things equal, an observer should reason as if they are randomly selected from the set of all actually existent observers (past, present and future) in their reference class.”
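To see how SSA produces the doomsday shift, here is a toy calculation (the hypothesized totals are invented for illustration): conditioning on your birth rank favors hypotheses on which the total number of humans who will ever live is small.

```python
# Why SSA generates a doomsday update (toy numbers, not from the post):
# condition on your birth rank among all humans who will ever live.

my_rank = 60e9                      # roughly the humans born so far
hypotheses = {"doom_soon": 200e9, "long_future": 200e12}   # total humans ever
priors = {"doom_soon": 0.5, "long_future": 0.5}

# Under SSA, P(having this particular rank | hypothesis) is 1/total for every
# rank <= total, so smaller totals make your rank proportionally more likely.
likelihood = {h: (1.0 / n if my_rank <= n else 0.0) for h, n in hypotheses.items()}

unnorm = {h: priors[h] * likelihood[h] for h in hypotheses}
total = sum(unnorm.values())
posterior = {h: v / total for h, v in unnorm.items()}
print(posterior)   # doom_soon ends up ~1000x more probable than long_future
```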

Unfortunately for SIA, it implies that “Given the fact that you exist, you should (other things equal) favor hypotheses according to which many observers exist over hypotheses on which few observers exist.” Surely that should not follow, but clearly it does. So we can formulate another anthropic problem:

It is the year 2100 and physicists have narrowed down the search for a theory of everything to only two remaining plausible candidate theories, T1 and T2 (using considerations from super-duper symmetry). According to T1 the world is very, very big but finite, and there are a total of a trillion trillion observers in the cosmos. According to T2, the world is very, very, very big but finite, and there are a trillion trillion trillion observers. The super-duper symmetry considerations seem to be roughly indifferent between these two theories. The physicists are planning on carrying out a simple experiment that will falsify one of the theories. Enter the presumptuous philosopher: “Hey guys, it is completely unnecessary for you to do the experiment, because I can already show to you that T2 is about a trillion times more likely to be true than T1!”

This one is called the “presumptuous philosopher”. Clearly the presumptuous philosopher should not get a Nobel prize.
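For concreteness, here is the philosopher’s arithmetic as a short sketch (the theory names and observer counts are from the quote; the equal priors are the stated indifference of the super-duper symmetry considerations): SIA weights each theory by its observer count before renormalizing, which is exactly where the trillion-to-one factor comes from.

```python
# How the presumptuous philosopher gets his trillion-to-one odds: SIA weights
# each theory's prior by the number of observers it predicts, then renormalizes.
# (A sketch; the AA/SHA answer below simply refuses to do this weighting.)

priors = {"T1": 0.5, "T2": 0.5}
observers = {"T1": 1e24, "T2": 1e36}   # trillion trillion vs. trillion trillion trillion

weighted = {t: priors[t] * observers[t] for t in priors}
total = sum(weighted.values())
posterior = {t: w / total for t, w in weighted.items()}

print(posterior["T2"] / posterior["T1"])   # ~1e12: T2 favored a trillion to one
```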

These questions have caused much psychological distress and been beaten to death in certain corners of the internet, but as far as I know, few people have satisfactory answers. Wei Dai’s UDT might be satisfactory for this, and might be equivalent to my answer, when the dust settles.

So what’s my objection to these schemes, and what’s my scheme?

My objection is aesthetic; I don’t like that SIA and SSA seem to place some kind of ontological specialness on “observers”. This reminds me way too much of souls, which are nonsense. The whole “reference class” thing rubs me the wrong way as well. Reference classes are useful tools for statistical approximation, not fundamental features of epistemology. So I’m hesitant to accept these theories.

Instead, I take the position that you can never conclude anything from your own existence except that you exist. That is, I eliminate all hypotheses that don’t predict my existence, and leave it at that, in accordance with SHA. No update happens in the Doomsday Argument; both glorious futures and impending doom are consistent with my existence, so their relative probability has to come from other reasoning. And the presumptuous philosopher is an idiot, because both theories are consistent with us existing, so again we get no relative update.
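As a sketch of how little work the AA/SHA update does here (a toy example, mirroring the presumptuous philosopher numbers above): the only question asked of each hypothesis is whether it predicts my existence at all, and since both do, the priors pass through unchanged.

```python
# The AA/SHA rule applied to the presumptuous philosopher: the only
# elimination step is "does this hypothesis predict that I exist at all?",
# and both theories pass, so the priors come through untouched.
# (Sketch only; names and numbers are illustrative.)

priors = {"T1": 0.5, "T2": 0.5}
predicts_my_existence = {"T1": True, "T2": True}

surviving = {t: p for t, p in priors.items() if predicts_my_existence[t]}
total = sum(surviving.values())
posterior = {t: p / total for t, p in surviving.items()}

print(posterior)   # {'T1': 0.5, 'T2': 0.5} -- no update from mere existence
```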

By reasoning purely from consistency of possible worlds with observations, SHA gives us a reasonably principled way to just punt on these questions. Let’s see how it does on another anthropic question, the Sleeping Beauty Problem:

Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. Once or twice, during the experiment, Beauty will be wakened, interviewed, and put back to sleep with an amnesia-inducing drug that makes her forget that awakening. A fair coin will be tossed to determine which experimental procedure to undertake: if the coin comes up heads, Beauty will be wakened and interviewed on Monday only. If the coin comes up tails, she will be wakened and interviewed on Monday and Tuesday. In either case, she will be wakened on Wednesday without interview and the experiment ends.

Any time Sleeping Beauty is wakened and interviewed, she is asked, “What is your belief now for the proposition that the coin landed heads?”

SHA says that the coin came up heads in half of the worlds, and no further update happens based on existence. I’m slightly uncomfortable with this, because SHA is cheerfully biting a bullet that has confused many philosophers. However, I see no reason not to bite this bullet; it doesn’t seem to have any particularly controversial implications for actual decision making. If she is paid for each correct guess, for example, she’ll say that she thinks the coin came up tails (this way she gets $2 half the time, instead of $1 half the time for guessing heads). If she’s paid only for her Monday answer, she’s indifferent between the options, as she should be.
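Here is a quick expected-value check of those payoff claims (a toy calculation over the two equally likely coin outcomes, assuming $1 per correct guess and that she always gives the same answer):

```python
# Expected winnings for the two payoff schemes mentioned above, computed
# directly over the two equally likely worlds (heads-world, tails-world).
# A quick check that always-guess-tails wins the per-awakening bet while
# the Monday-only bet is a wash.

def expected_payout(guess, pay_per_awakening):
    # heads-world: one awakening (Monday); tails-world: two (Monday, Tuesday)
    worlds = [("heads", ["Mon"]), ("tails", ["Mon", "Tue"])]
    total = 0.0
    for outcome, awakenings in worlds:
        for day in awakenings:
            if guess == outcome and (pay_per_awakening or day == "Mon"):
                total += 1.0
    return total / len(worlds)   # each world has probability 1/2

print(expected_payout("tails", True),  expected_payout("heads", True))   # 1.0 0.5
print(expected_payout("tails", False), expected_payout("heads", False))  # 0.5 0.5
```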

What if we modify the problem slightly, and ask Sleeping Beauty for her credence that it’s Monday? That is, her credence that “it” “is” Monday. If the coin came up heads, there is only Monday, but if it came up tails, there is a Monday observer and a Tuesday observer. AA/SHA reasons purely from the perspective of possible worlds, and says that Monday is consistent with observations, as is Tuesday, and refuses to speculate further on which “observer” among possible observers she “is”. Again, given an actual decision problem with an actual payoff structure, AA/SHA will quickly reach the correct decision, even while refusing to assign probabilities “between observers”.

I’d like to say that we’ve casually thrown out probability theory when it became inconvenient, but we haven’t; we’ve just refused to answer a meaningless question. The meaninglessness of indexical uncertainty becomes apparent when you stop believing in the specialness of observers. It’s like asking “What’s the probability that the Sun rather than the Earth?” That the Sun what? The Sun and the Earth both exist, for example, but maybe you meant something else. Want to know which one this here comet is going to hit? Sure, I’ll answer that, but these generic “which one” questions are meaningless.

Not that I’m familiar with UDT, but this really is starting to remind me of UDT. Perhaps it even is part of UDT. In any case, Anthropic Atheism seems to easily give intuitive answers to anthropic questions. Maybe it breaks down on some edge case, though. If so, I’d like to see it. In the meantime, I don’t believe in observers.

ADDENDUM: As Wei Dai, DanielLC, and Tyrrell_McAllister point out below, it turns out this doesn’t actually work. The objection is that by refusing to include the indexical hypothesis, we end up favoring universes with more variety of experiences (because they have a high chance of containing *our* experiences) and sacrificing the ability to predict much of anything. Oops. It was fun while it lasted ;)