Are you in a Boltzmann simulation?

EDIT: Donald Hobson has pointed out a mistake in the reasoning in the section on nucleation. If we know that an area of space-time has a disproportionately high number of observer moments, then it is very likely that these are from long-lived Boltzmann simulations. However, this does not imply, as I thought, that most observer moments are in long-lived Boltzmann simulations.

Most people on LessWrong are familiar with the concept of Boltzmann brains—conscious brains that are randomly created by quantum or thermodynamic interactions, and then swiftly vanish.

There seem to be two types of Boltzmann brains: quantum fluctuations, and nucleated brains (actual brains produced in the vacuum of space by the expanding universe).

The quantum fluctuation brains cannot be observed (the fluctuation dies down without producing any observable effect: there's no decoherence or permanence). If I'm reading these papers right, the probability of producing any given object of duration $t$ and mass $m$ is approximately

$$e^{-mc^2t/\hbar}.$$

We'll be taking $m \approx 1.4$ kg for a human brain, and $t \approx 0.1$ s for it having a single coherent thought.

A few notes about this number. First of all, it is vanishingly small, as an exponential of a negative exponential. It's so small, in fact, that we don't really care too much over what volume of space-time we're calculating this probability. Over a Planck length four-volume, over a metre to the fourth power, or over a Hubble volume for 15 billion years: the probabilities of an object being produced in any of these spaces are approximately all of the same magnitude, $e^{-10^{50}}$ (more properly, the probabilities vary tremendously, but any tiny uncertainty in the $10^{50}$ term dwarfs all these changes).
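
As a rough sanity check (mine, not from the papers cited above), here is a short Python sketch that evaluates the exponent $mc^2t/\hbar$ with standard SI constants and the brain-sized assumptions above:

```python
# Rough magnitude check for the quantum-fluctuation exponent m*c^2*t / hbar.
C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s

m = 1.4            # assumed brain mass, kg
t = 0.1            # assumed duration of a single coherent thought, s

exponent = m * C**2 * t / HBAR
print(f"m*c^2*t/hbar ~ {exponent:.2e}")   # ~1.2e50
# So the fluctuation probability is roughly exp(-10^50); any plausible change
# in the space-time four-volume barely moves this exponent.
```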

Similarly, we don't need to consider the entropy of producing a specific brain (or any other structure). A small change in mass overwhelms the probability of a specific mass setup being produced. Here's a rough argument for that: the Bekenstein bound puts a limit on the number of bits of information in a volume of space of given size and given mass (or energy). For a mass $m$ and radius $R$, it is approximately

$$\frac{2\pi mcR}{\hbar \ln 2} \approx 2.6 \times 10^{43}\, mR \text{ bits},$$

with $m$ in kilograms and $R$ in metres.

Putting $m \approx 1.4$ kg and $R \approx 0.1$ m, we get that the number of possible different states in a brain-like object of brain-like mass and size is less than

$$2^{3.6 \times 10^{42}},$$

which is much, much, much, ..., much, much less than the inverse of $e^{-10^{50}}$.
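
The same kind of quick check (my own sketch, using the standard form of the Bekenstein bound) shows how negligible this state-counting factor is next to the probability above:

```python
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s

m = 1.4            # assumed brain-like mass, kg
R = 0.1            # assumed brain-like radius, m

bits = 2 * math.pi * m * C * R / (HBAR * math.log(2))
print(f"Bekenstein bound ~ {bits:.2e} bits")          # ~3.6e42 bits

# At most 2^bits distinct states, i.e. exp(bits * ln 2) ~ exp(2.5e42),
# dwarfed by the inverse probability exp(+1.2e50) computed earlier.
print(f"ln(#states) <= {bits * math.log(2):.2e}")
```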

Quantum fluctuations and causality

What is interesting is that the probability expression is exponentially linear in $t$:

$$e^{-mc^2t/\hbar}.$$

Therefore it seems that the probability of producing one brain of duration $t_1$, and another independent brain of duration $t_2$, is the same as producing one brain of duration $t_1 + t_2$. Thus it seems that there is no causality in quantum fluctuating Boltzmann brains: any brain produced of long duration is merely a sequence of smaller brain moments that happen to be coincidentally following each other (though I may have misunderstood the papers here).
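
To make that step explicit (assuming the $e^{-mc^2t/\hbar}$ form above is right), the probability factorises exactly into independent shorter-duration pieces:

$$e^{-mc^2(t_1+t_2)/\hbar} = e^{-mc^2t_1/\hbar} \cdot e^{-mc^2t_2/\hbar},$$

so a brain lasting $t_1 + t_2$ is no more likely than two unrelated brain-moments of durations $t_1$ and $t_2$ occurring separately.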

Nucleation and Boltzmann simulations

If we understand dark energy correctly, it will transform our universe into a de Sitter universe. In such a universe, the continuing expansion of the universe acts like the event horizon of a black hole, and sometimes, spontaneous objects will be created, similarly to Hawking radiation. Thus a de Sitter space can nucleate: spontaneously create objects. The probability of a given object of mass $m$ being produced is given as

$$e^{-2\pi mc^2/(\hbar H)},$$

where $H$ is the Hubble constant of the de Sitter universe.

This number is much, much, much, much, ..., much, much, much, much smaller than the quantum fluctuation probability. But notice something interesting about it: it has no time component. Indeed, the objects produced by nucleation are actual objects: they endure. Think of a brain in a sealed jar, floating through space.
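
For comparison with the $e^{-10^{50}}$ fluctuation figure, here is the same kind of sketch for the nucleation exponent; the far-future Hubble rate $H \approx 2 \times 10^{-18}\ \mathrm{s}^{-1}$ is my rough assumption, and the exact value barely matters at this level of precision:

```python
import math

C = 2.998e8        # speed of light, m/s
HBAR = 1.055e-34   # reduced Planck constant, J*s
H = 2e-18          # assumed de Sitter Hubble rate, 1/s (rough order of magnitude)

m = 1.4            # assumed brain mass, kg

exponent = 2 * math.pi * m * C**2 / (HBAR * H)
print(f"2*pi*m*c^2/(hbar*H) ~ {exponent:.1e}")   # ~4e69
# exp(-10^69) is far smaller than the exp(-10^50) fluctuation probability,
# but the exponent depends only on the mass m, not on any duration.
```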

Now, a normal brain in empty space (and at almost absolute-zero temperature) will decay at once; let's be generous, and give it a second of survival in something like a functioning state.

EDIT: there is a mistake in the following, see here.

Creating $n$ independent one-second brains is an event of probability:

$$\left(e^{-2\pi m_1 c^2/(\hbar H)}\right)^n = e^{-2\pi n m_1 c^2/(\hbar H)},$$

where $m_1 \approx 1.4$ kg is the mass of a bare one-second brain. But creating a brain that lasts for $n$ seconds will be an event of probability

$$e^{-2\pi m_n c^2/(\hbar H)},$$

where $m_n$ is the minimum mass required to keep the brain running for $n$ seconds.

It's clear that $m_n$ can be way below $n \cdot m_1$. For example, the longest moonwalk was 7 h 36 min 56 s (Apollo 17, second moonwalk), or 27,416 seconds. To do this the astronauts used spacesuits of mass around 82 kg. If you estimate that their own body mass was roughly 100 kg, we get $m_{27416} \approx 182$ kg, far below $27416 \cdot m_1 \approx 38{,}000$ kg.
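
As a quick arithmetic check of that comparison (using the rough figures above: a 1.4 kg brain bought one second at a time, versus 182 kg of astronaut plus suit lasting 27,416 seconds):

```python
m1 = 1.4            # kg: assumed mass that buys one second of bare-brain survival
n = 27_416          # s: duration of the longest Apollo 17 moonwalk
m_n = 100 + 82      # kg: rough astronaut body mass plus spacesuit mass

naive_mass = n * m1
print(f"n*m1 ~ {naive_mass:,.0f} kg  vs  m_n ~ {m_n} kg")   # ~38,000 kg vs 182 kg
# The exponent for one long-lived nucleation is ~200 times smaller than for
# 27,416 independent one-second brains, so the long-lived version dominates.
```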

This means that for nucleated Boltzmann brains, unlike for quantum fluctuations, most observer moments will be parts of long-lived individuals, with a life experience that respects causality.

And we can get much, much more efficient than that. Since mass is the real limit, there's no problem in using anti-matter as a source of energy. The human brain runs at about 20 watts; one half gram of matter with one half gram of anti-matter produces enough energy to run this for about $4.5 \times 10^{12}$ seconds, or 140 thousand years. Now, granted, you'll need a larger infrastructure to extract and make use of this energy, and to shield and repair the brain; however, this larger infrastructure doesn't need to have a mass anywhere near $10^{23}$ kilos (which is the order of magnitude of the mass of the moon itself).
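
The matter/antimatter figure is simple arithmetic (one gram annihilated in total, powering a 20 W brain):

```python
C = 2.998e8              # speed of light, m/s

mass_annihilated = 1e-3  # kg: half a gram of matter plus half a gram of antimatter
power = 20.0             # W: rough power consumption of a human brain

energy = mass_annihilated * C**2            # ~9e13 J
seconds = energy / power                    # ~4.5e12 s
years = seconds / 3.15e7                    # ~3.15e7 seconds per year
print(f"{seconds:.1e} s  ~  {years:,.0f} years")   # ~140,000 years
```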

And that's all neglecting improvements to the energy efficiency and durability of the brain. It seems that the most efficient and durable version of a brain—in terms of mass, which is the only thing that matters here—is to run the brain on a small but resilient computer, with as much power as we'd want. And, if we get to re-use code, then we can run many brains on a slightly larger computer, with the mass growth being less than the growth in the number of brains.

Thus, most nucleated Boltzmann brain observer-moments will be inside a Boltzmann simulation: a spontaneous (and causal) computer simulation created in the deep darkness of space.