Boltzmann Brains, Simulations and Self-Refuting Hypotheses

Let’s suppose, for the purposes of this post, that our best model of dark energy is such that an exponentially vast number of Boltzmann brains will exist in the far future. The idea that we may be in an ancestor simulation is similar in its self-refuting nature but slightly vaguer, as it depends on the likely goals of future societies.

What do I mean when I say that these arguments are self-refuting? I mean that accepting the conclusion seems to give a good reason to reject the premise. Once you actually accept that you are a Boltzmann brain, all your reasoning about the nature of dark energy becomes random noise. There is no reason to think that you have the slightest clue about how the universe works. We seem to be getting evidence that all our evidence is nonsense, including the evidence that told us that. The same holds for the simulation hypothesis, unless you conjecture that all civilizations make ancestor simulations almost exclusively.

What’s actually going on here? We have three hypotheses.

1) No Boltzmann brains: the magic dark energy fairy stops them being created somehow. (Universe A)

2) Boltzmann brains exist, and I am not one. (Universe B)

3) I am a Boltzmann brain. (Universe B)

As all these hypotheses fit the data, we have to tell them apart using priors and anthropic decision theory. The confusion comes from not having decided on an anthropic theory to use, but ad-libbing one with intuition.

SIA (the Self-Indication Assumption) selects from all possible observers, and so tells you that 3) is by far the most likely.
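As a rough sketch of that calculation (the notation is mine, not from the original argument): SIA weights each hypothesis by the number of observers whose experiences match yours,

$$P_{\text{SIA}}(H) \propto P_{\text{prior}}(H)\, N(H),$$

where $N(H)$ is the count of such observers under hypothesis $H$. Universe B’s far future contains exponentially many Boltzmann brains, and by sheer chance exponentially many of them momentarily share exactly your experiences, so hypothesis 3) swamps the other two.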

SSA (the Self-Sampling Assumption), with an Occamian prior, says that Universe B is slightly more likely, because it takes fewer bits to specify. However, most of the observers in Universe B are Boltzmann brains seeing random gibberish. The observation of any kind of pattern therefore gives an overwhelming update towards option 1).
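As a hedged sketch of that update, writing $N_{\text{evo}}$ for evolved observers and $N_{\text{BB}}$ for Boltzmann brains in Universe B (illustrative symbols, not estimates):

$$P(\text{ordered observations} \mid B) \approx \frac{N_{\text{evo}}}{N_{\text{evo}} + N_{\text{BB}}} \ll 1, \qquad P(\text{ordered observations} \mid A) \approx 1.$$

So even if the Occamian prior gives Universe B a small head start, the likelihood ratio from seeing any ordered world overwhelmingly favours option 1).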

If we choose to minimize the sum of the amount of info needed to describe the universe and the amount needed to specify your place within it, then we find that Universe B is simpler to describe, and that it is far easier to describe the position of an evolved life-form near the beginning of time than to locate a Boltzmann brain a vast number of years in. An AIXI that is simulating the rest of the universe, with patching rules to match its actions up to the world, will act as if it believes option 2).
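To make that trade-off explicit, here is a rough description-length sketch of the quantity being minimised (the decomposition is my paraphrase, not a formal derivation):

$$K_{\text{total}} = K(\text{universe}) + K(\text{your location} \mid \text{universe}).$$

Universe B saves a few bits on the first term, since it needs no special mechanism to suppress Boltzmann brains; but specifying an evolved observer near the beginning of time costs only the bits for a small time index and an early planet, whereas singling out one particular Boltzmann brain in the remote future costs an enormous number of bits for its time coordinate alone. The sum is therefore minimised by option 2): Universe B, with you as an early evolved observer.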