Boltzmann Brains and Within-model vs. Between-model Probability

Why Boltzmann brains?

Back in the days of Ludwig Boltzmann (before the ~1912 discovery of galactic redshift), it seemed like the universe could be arbitrarily old. Since the second law of thermodynamics says that everything tends to a state of high entropy, a truly old universe would have long ago sublimated into a lukewarm bath of electromagnetic radiation (perhaps with a few enormous dead stars slowly drifting towards each other). If this were true, our solar system and the stars we see would be just a bubble of order in a vast sea of chaos, perhaps one that arose spontaneously, the way a deck of cards shuffled for long enough eventually passes through every arrangement, even the ordered ones.

The only problem is, bubbles of spontaneous order are astoundingly unlikely. However long it takes for one iron star to spontaneously reverse fusion and become hydrogen, it takes roughly that length of time squared for two stars to do it at the same time, because the probabilities of independent fluctuations multiply. So if we're trying to explain our subjective experience within a long-lived universe that might have these bubbles of spontaneous order, the most likely guesses are those that involve the smallest amount of matter. The classic example is an isolated brain with your memories and current experience, briefly congealing out of the vacuum before evaporating again.
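Schematically (this is just the standard way of quantifying the argument, not anything new): the probability of a thermal fluctuation that decreases entropy by $\Delta S$ scales like

$$
P(\text{fluctuation}) \sim e^{-\Delta S / k_B},
$$

so independent fluctuations multiply, exponents add, and the fluctuation needing the smallest entropy decrease (a lone brain, rather than a whole solar system) overwhelmingly dominates.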

What now?

The rarest, if sort of admirable, response is to agree that you're probably a Boltzmann brain, and to do, at all times, the thing that gets you the most pleasure in the next tenth of a second. But maybe you subscribe to Egan's law ("Everything adds up to normality"), in which case there are basically two responses. Either you accept that you're probably a Boltzmann brain but shouldn't act like it, because you only care about your long-lived selves, or you think that you genuinely are long-lived, because the universe genuinely doesn't produce Boltzmann brains.

Within our best understanding of the apparent universe, there shouldn’t be any Boltzmann brains. Accelerating expansion will drive the universe closer and closer to its lowest energy eigenstate, which suppresses time evolution. Change in a quantum system comes from energy differences between states, and in a fast-expanding universe, those energy differences get redshifted away.
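To spell out the middle step in standard quantum-mechanical terms: writing the state in the energy basis,

$$
|\psi(t)\rangle = \sum_n c_n\, e^{-iE_n t/\hbar}\, |n\rangle,
\qquad
\langle A(t)\rangle = \sum_{m,n} c_m^* c_n\, \langle m|A|n\rangle\, e^{-i(E_n - E_m)t/\hbar},
$$

so every observable's time dependence enters only through the differences $E_n - E_m$; drive those differences to zero and nothing observable ever changes.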

But that’s only the within-model understanding. Given our sense-data, we should assign a probability distribution over many different possible laws of physics that could explain that sense-data. And if some of those laws of physics result in a very large number of Boltzmann brains of you, does this mean that you should agree with Bostrom’s Presumptuous Philosopher, and therefore assign a very high probability to being a Boltzmann brain, nearly regardless of the prior over laws?

In short, suppose that the common-sense universe is the simplest explanation of your experiences but contains only one copy of them, while a slightly different universe is more complex (say the physical laws take 20 more bits to write) but contains 10^100 copies of your experiences. Should you act as if you’re in the more complicated universe?
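To see why the presumptuous answer is "yes", here's the copy-counting arithmetic the Presumptuous Philosopher would do (this is the usual SIA-style weighting, where each copy of your experiences counts as a separate place you might be; it's an assumption, not something argued for above):

$$
\frac{P(\text{complex universe} \mid \text{your experiences})}{P(\text{simple universe} \mid \text{your experiences})}
\;\approx\; \frac{2^{-20} \times 10^{100}}{1 \times 1}
\;\approx\; 10^{94},
$$

so under that weighting the copy-heavy universe wins by an enormous margin, no matter how reasonable the 20-bit complexity penalty looks.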

Bostrom and FHI have written some interesting things about this, and coined some TLAs, but I haven’t read anything that really addresses what feels like the fundamental tension: between believing you’re a Boltzmann brain on the one hand, and having an ad-hoc distinction between within-model and between-model probabilities on the other.

Changing Occam’s Razor?

Here’s an idea that’s probably not original: what would a Solomonoff inductor think? A Solomonoff inductor doesn’t directly reason about physical laws; it just tries to find the shortest program that reproduces its observations so far. Different copies of it within the same universe actually correspond to different programs: each program has a simple part that specifies the physical laws, plus a complicated indexical parameter that says where in that universe to find that copy’s observations so far.

If we pick only the simplest program making each future prediction (e.g. about the Presumptuous Philosopher’s science experiment), then the number of copies per universe doesn’t matter at all. Even if we’re a bit more general and consider a prior over all the generating prefix-free programs, the inclusion of this indexical parameter in the complexity of the program means that even if a universe contains infinitely many copies of you, only a bounded amount of probability gets assigned to it, and the harder the copies are to locate (for instance, by virtue of being ridiculously rare), the more bits they’re penalized.
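Here's a toy numerical sketch of that cancellation (all the specific numbers, law bits and location bits alike, are made up purely for illustration; only the structure matters):

```python
import math

# Toy model of the Solomonoff-style weighting described above:
# each program that reproduces your observations costs
#   (bits to specify the physical laws) + (bits to locate your copy),
# and a universe's total weight is the sum of 2^-length over its copies.

def universe_weight(law_bits, locate_bits_per_copy):
    return sum(2.0 ** -(law_bits + b) for b in locate_bits_per_copy)

K = 400            # pretend complexity of the simple universe's laws
BASE_LOCATE = 30   # pretend bits needed to point at a single copy of you

# Simple universe: one copy of your observations.
simple = universe_weight(K, [BASE_LOCATE])

# More complex universe: 20 extra bits of laws and a million copies of
# you, but picking out any one copy now takes ~log2(1e6) extra bits.
n = 10**6
complex_ = universe_weight(K + 20, [BASE_LOCATE + math.log2(n)] * n)

print(f"simple  universe weight ~ 2^{math.log2(simple):.0f}")    # ~ 2^-430
print(f"complex universe weight ~ 2^{math.log2(complex_):.0f}")  # ~ 2^-450
# The million copies contribute a factor of n, but each pays ~log2(n)
# extra location bits (a factor of 1/n), so the copy count washes out
# and the 20 extra bits of laws still decide the comparison.
```

Real location costs are at least this large (you can't single out one of n copies with fewer than about log2(n) bits on average), so a universe never wins just by containing more copies of you.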

All of this sounds like a really appealing resolution. Except… it’s contrary to our current usage of Occam’s razor. Under the current framework, the prior probability of a hypothesis about physics depends only on the complexity of the physical laws: no terms for initial conditions, and certainly no dependence on who exactly you are and how computationally easy it is to specify you within those laws. We even have LessWrong posts correcting people who think Occam’s razor penalizes physical theories with billions and billions of stars. But if we now have a penalty for the difficulty of locating ourselves within the universe, part of that penalty looks like the log of the number of stars! I’m not totally sure whether this is a bug or a feature yet.
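Concretely (a back-of-the-envelope version of that last point, not a worked-out theory): if your observations could sit around any of N similar stars, just saying which one takes about

$$
\log_2 N \ \text{bits}
\quad\Longrightarrow\quad
\text{a prior factor of } 2^{-\log_2 N} = \tfrac{1}{N},
$$

which is exactly the sort of per-star penalty that the usual reading of Occam’s razor insists shouldn’t be there.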