Are you in a Boltzmann simulation?

EDIT: Donald Hobson has pointed out a mistake in the reasoning in the section on nucleation. If we know that an area of space-time has a disproportionately high number of observer moments, then it is very likely that these are from long-lived Boltzmann simulations. However, this does not imply, as I thought, that most observer moments are in long-lived Boltzmann simulations.

Most people on LessWrong are familiar with the concept of Boltzmann brains—conscious brains that are randomly created by quantum or thermodynamic interactions, and then swiftly vanish.

There seem to be two types of Boltzmann brains: quantum fluctuation brains, and nucleated brains (actual brains produced in the vacuum of space by the expanding universe).

The quantum fluctuation brains cannot be observed (the fluctuation dies down without producing any observable effect: there’s no decoherence or permanence). If I’m reading these papers right, the probability of producing any given object of duration $t$ and mass $m$ is approximately

$$e^{-mc^2t/\hbar}.$$

We’ll be taking $m = 1.4\,\mathrm{kg}$ for a human brain, and $t = 0.1\,\mathrm{s}$ for it having a single coherent thought.
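
To check the order of magnitude, here’s a quick back-of-the-envelope calculation (a sketch: the constants are standard, and I track the exponent rather than the probability itself, since the latter underflows any floating-point type):

```python
import math

hbar = 1.055e-34   # reduced Planck constant, J*s
c = 3.0e8          # speed of light, m/s
m = 1.4            # mass of a human brain, kg
t = 0.1            # duration of a single coherent thought, s

# Exponent of the fluctuation probability e^(-m c^2 t / hbar).
# The probability itself underflows any float, so track only the exponent.
exponent = m * c**2 * t / hbar
print(f"m*c^2*t/hbar ~ 10^{math.log10(exponent):.1f}")   # ~ 10^50.1
```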

A few notes about this number. First of all, it is vanishingly small, as an exponential of a negative exponential. It’s so small, in fact, that we don’t really care too much over what volume of space-time we’re calculating this probability. Over a Planck-length four-volume, over a metre to the fourth power, or over a Hubble volume for 15 billion years: the probabilities of an object being produced in any of these spaces are all of approximately the same magnitude, $e^{-10^{50}}$ (more properly, the probabilities vary tremendously, but any tiny uncertainty in the exponent $mc^2t/\hbar$ dwarfs all these changes).

Similarly, we don’t need to consider the entropy of producing a specific brain (or any other structure). A small change in mass overwhelms the probability of a specific mass setup being produced. Here’s a rough argument for that: the Bekenstein bound puts a limit on the number of bits of information in a volume of space of given size and given mass (or energy). For a mass $m$ and radius $R$, it is approximately

$$\frac{2\pi c}{\hbar \ln 2}\, mR \approx 2.6 \times 10^{43}\, mR \text{ bits}$$

(with $m$ in kilograms and $R$ in metres).

Putting $m = 1.4\,\mathrm{kg}$ and $R = 0.1\,\mathrm{m}$, we get that the number of possible different states in a brain-like object of brain-like mass and size is less than

$$2^{3.6 \times 10^{42}} \approx 10^{10^{42}},$$

which is much, much, much, …, much, much less than the inverse of the fluctuation probability $e^{-10^{50}}$.
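
Here’s a sketch of that comparison, with the same assumed constants as above; everything has to be handled through iterated logarithms, since the numbers themselves are unwritable:

```python
import math

hbar, c = 1.055e-34, 3.0e8   # J*s, m/s
m, R, t = 1.4, 0.1, 0.1      # brain mass (kg), radius (m), thought duration (s)

# Bekenstein bound: 2*pi*c*m*R / (hbar*ln 2) bits of information.
bits = 2 * math.pi * c * m * R / (hbar * math.log(2))

# Compare log10(number of states) with log10(1 / fluctuation probability).
log10_states = bits * math.log10(2)                       # states = 2^bits
log10_inv_p = (m * c**2 * t / hbar) * math.log10(math.e)  # 1/P = e^(m c^2 t/hbar)
print(f"states ~ 10^10^{math.log10(log10_states):.1f}")   # ~ 10^10^42.0
print(f"1/P    ~ 10^10^{math.log10(log10_inv_p):.1f}")    # ~ 10^10^49.7
```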

Quantum fluctuations and causality

What is interesting is that the probability expression is exponentially linear in $t$:

$$e^{-mc^2t/\hbar} = \left(e^{-mc^2/\hbar}\right)^t.$$

Therefore it seems that the probability of producing one brain of duration $t_1$, and another independent brain of duration $t_2$, is the same as producing one brain of duration $t_1 + t_2$. Thus it seems that there is no causality in quantum fluctuating Boltzmann brains: any brain produced of long duration is merely a sequence of smaller brain moments that happen to be coincidentally following each other (though I may have misunderstood the papers here).
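
This is just the algebra of exponents; a minimal illustration in log-probabilities:

```python
import math

hbar, c, m = 1.055e-34, 3.0e8, 1.4

def log_p(t):
    """Log-probability of fluctuating a brain of mass m for duration t seconds."""
    return -m * c**2 * t / hbar

t1, t2 = 0.1, 0.3
# Two independent short-lived brains are exactly as likely as one long-lived one.
print(math.isclose(log_p(t1) + log_p(t2), log_p(t1 + t2)))   # True
```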

Nucleation and Boltzmann simulations

If we understand dark energy correctly, it will transform our universe into a de Sitter universe. In such a universe, the continuing expansion of the universe acts like the event horizon of a black hole, and sometimes spontaneous objects will be created, similarly to Hawking radiation. Thus a de Sitter space can nucleate: spontaneously create objects. The probability of a given object of mass $m$ being produced is given as

$$e^{-2\pi mc^2/(\hbar H)},$$

where $H$ is the Hubble rate of the de Sitter universe.

This number (roughly $e^{-10^{69}}$ for a brain-sized mass) is much, much, much, much, …., much, much, much, much smaller than the quantum fluctuation probability. But notice something interesting about it: it has no time component. Indeed, the objects produced by nucleation are actual objects: they endure. Think of a brain in a sealed jar, floating through space.
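
Here’s a sketch of the two exponents side by side. The formula is the one above; the Hubble rate $H \approx 2 \times 10^{-18}\,\mathrm{s}^{-1}$ (roughly its current value) is my assumption for the far-future de Sitter universe:

```python
import math

hbar, c = 1.055e-34, 3.0e8
H = 2.0e-18        # assumed de Sitter Hubble rate, 1/s (roughly today's value)
m, t = 1.4, 0.1    # brain mass (kg), thought duration (s)

nucleation_exp = 2 * math.pi * m * c**2 / (hbar * H)   # exponent x in e^-x
fluctuation_exp = m * c**2 * t / hbar
print(f"nucleation:  e^-x with x ~ 10^{math.log10(nucleation_exp):.1f}")   # ~ 10^69.6
print(f"fluctuation: e^-x with x ~ 10^{math.log10(fluctuation_exp):.1f}")  # ~ 10^50.1
```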

Now, a normal brain in empty space (at almost absolute-zero temperatures) will decay at once; let’s be generous and give it a second of survival in something like a functioning state.

EDIT: there is a mistake in the following, see here.

Creating $n$ independent one-second brains is an event of probability:

$$\left(e^{-2\pi mc^2/(\hbar H)}\right)^n = e^{-2\pi nmc^2/(\hbar H)}.$$

But creating a brain that lasts for $n$ seconds will be an event of probability

$$e^{-2\pi m_n c^2/(\hbar H)},$$

where $m_n$ is the minimum mass required to keep the brain running for $n$ seconds.

It’s clear that $m_n$ can be way below $nm$. For example, the longest moonwalk was 7 h 36 min 56 s (Apollo 17, second moonwalk), or 27,416 seconds. To do this, the astronauts used spacesuits of mass around 82 kg. If you estimate that their own body mass was roughly 100 kg, we get $m_n \approx 182\,\mathrm{kg} \approx nm/200$.
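
The arithmetic, using the rough estimates above:

```python
n = 27_416        # duration of the longest moonwalk, seconds (Apollo 17)
m = 1.4           # kg: one nucleated one-second brain
m_n = 100 + 82    # kg: one astronaut's body plus spacesuit

print(f"n*m = {n * m:,.0f} kg")                              # ~38,000 kg
print(f"m_n = {m_n} kg, i.e. ~ n*m / {n * m / m_n:.0f}")     # ~ n*m / 200
```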

This means that for nucleated Boltzmann brains, unlike for quantum fluctuations, most observer moments will be parts of long-lived individuals, with a life experience that respects causality.

And we can get much much more efficient than that. Since mass is the real limit, there’s no problem in using anti-matter as a source of energy. The human brain runs at about 20 watts; one half gram of matter with one half gram of anti-matter produces enough energy to run this for about $4.5 \times 10^{12}$ seconds, or 140 thousand years. Now, granted, you’ll need a larger infrastructure to extract and make use of this energy, and to shield and repair the brain; however, this larger infrastructure doesn’t need to have a mass anywhere near $nm \approx 6 \times 10^{12}$ kilos, let alone the $\sim 7 \times 10^{22}$ kilos of the moon itself.
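
The annihilation arithmetic (a sketch, with the moon’s mass thrown in for comparison):

```python
c = 3.0e8        # m/s
year = 3.15e7    # seconds

fuel = 1e-3      # kg: 0.5 g matter + 0.5 g anti-matter
power = 20.0     # W: rough power draw of a human brain
moon = 7.3e22    # kg: mass of the moon

runtime = fuel * c**2 / power          # seconds of brain operation
print(f"runtime ~ {runtime:.1e} s ~ {runtime / year:,.0f} years")  # ~143,000 years
print(f"n*m ~ {runtime * 1.4:.1e} kg vs moon's {moon:.1e} kg")     # ~6.3e12 vs 7.3e22
```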

And that’s all neglecting improvements to the energy efficiency and durability of the brain. It seems that the most efficient and durable version of a brain (in terms of mass, which is the only thing that matters here) is to run the brain on a small but resilient computer, with as much computing power as we’d want. And, if we get to re-use code, then we can run many brains on a slightly larger computer, with the mass growing more slowly than the number of brains.

Thus, most nucleated Boltzmann brain observer-moments will be inside a Boltzmann simulation: a spontaneous (and causal) computer simulation created in the deep darkness of space.