Boltzmann Brains and Within-model vs. Between-models Probability

Why Boltzmann brains?

Back in the days of Ludwig Boltzmann (before the ~1912 discovery of galactic redshift), it seemed like the universe could be arbitrarily old. Since the second law of thermodynamics says that everything tends to a state of high entropy, a truly old universe would have long ago sublimated into a lukewarm bath of electromagnetic radiation (perhaps with a few enormous dead stars slowly drifting towards each other). If this were true, our solar system and the stars we see would be just a bubble of order in a vast sea of chaos—perhaps spontaneously arising, like how if you keep shuffling a deck of cards, eventually it will pass through all arrangements, even the ordered ones.

The only problem is, bubbles of spontaneous order are astoundingly unlikely. However long it takes for one iron star to spontaneously reverse fusion and become hydrogen, it takes that length of time squared for two stars to do it. So if we’re trying to explain our subjective experience within a long-lived universe that might have these bubbles of spontaneous order, the most likely guesses are those that involve the smallest amount of matter. The classic example is an isolated brain with your memories and current experience, briefly congealing out of the vacuum before evaporating again.
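
A rough way to see that scaling (my own gloss, assuming the two fluctuations are independent and measuring time in fixed ticks):

```latex
% One fluctuation with per-tick probability p:  expected wait  T_1 \sim 1/p.
% Two independent fluctuations in the same tick: probability ~ p^2, so
T_2 \;\sim\; \frac{1}{p^{2}} \;=\; T_1^{\,2},
% i.e. roughly the single-event waiting time squared (in those tick units).
```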

What now?

The rarest, if sort of admirable, response is to agree you’re probably a Boltzmann brain, and at all times do the thing that gets you the most pleasure in the next tenth of a second. But maybe you subscribe to Egan’s law (“Everything adds up to normality”), in which case there are basically two responses. Either you are probably a Boltzmann brain, but shouldn’t act like it because you only care about long-lived selves, or you think that the universe genuinely is long-lived.

Within our best understanding of the apparent universe, there shouldn’t be any Boltzmann brains. Accelerating expansion will drive the universe closer and closer to its lowest energy eigenstate, which suppresses time evolution. Change in a quantum system comes from energy differences between states, and in a fast-expanding universe, those energy differences get redshifted away.
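
A textbook way to see that last sentence (standard quantum mechanics, nothing specific to this argument): for a superposition of two energy eigenstates, all observable change lives in the relative phase, and that phase is driven by the energy gap.

```latex
% Two-state superposition: pull out an unobservable global phase.
|\psi(t)\rangle
  = c_1 e^{-iE_1 t/\hbar}|E_1\rangle + c_2 e^{-iE_2 t/\hbar}|E_2\rangle
  = e^{-iE_1 t/\hbar}\Bigl(c_1|E_1\rangle
      + c_2\, e^{-i(E_2-E_1)t/\hbar}|E_2\rangle\Bigr)
% As the gap E_2 - E_1 shrinks toward zero, the state effectively stops changing.
```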

But that’s only the within-model understanding. Given our sense-data, we should assign a probability distribution over many different possible laws of physics that could explain that sense-data. And if some of those laws of physics result in a very large number of Boltzmann brains of you, does this mean that you should agree with Bostrom’s Presumptuous Philosopher, and therefore assign very high probability that you’re a Boltzmann brain, nearly regardless of the prior over laws?

In short, suppose that the common-sense universe is the simplest explanation of your experiences but contains only one copy of them, while a slightly different universe is more complex (say the physical laws take 20 more bits to write) but contains 10^100 copies of your experiences. Should you act as if you’re in the more complicated universe?
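
To put rough numbers on that (using the figures above and a naive rule that weights each hypothesis by how many copies of you it contains): 20 extra bits only cost a factor of about a million in prior, which is swamped by the copy count.

```latex
% Naive copy-weighted odds ratio, using the numbers from the text:
\frac{P(\text{bigger universe})}{P(\text{simple universe})}
  \;\approx\; \frac{2^{-20}\times 10^{100}}{1\times 1}
  \;\approx\; 10^{-6}\times 10^{100}
  \;=\; 10^{94}.
```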

Bostrom and FHI have written some interesting things about this, and coined some TLAs, but I haven’t read anything that really addresses what feels like the fundamental tension: between believing you’re a Boltzmann brain on the one hand, and having an ad-hoc distinction between within-model and between-model probabilities on the other hand.

Changing Occam’s Razor?

Here’s an idea that’s probably not original: what would a Solomonoff inductor think? A Solomonoff inductor doesn’t directly reason about physical laws; it just tries to find the shortest program that reproduces its observations so far. Different copies of it within the same universe actually correspond to different programs: each consists of a simple part that specifies the physical laws, plus a complicated indexical parameter that tells you where in that universe to find its observations so far.
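
In symbols (my paraphrase of the usual Solomonoff setup, not a quote from anywhere): each program that outputs your observations gets weight 2^(−its length), and for a “laws plus where-am-I” program that length splits into two parts.

```latex
% Weight of one "laws + index" program under the universal prior:
\ell(\text{program}) = \ell(\text{laws}) + \ell(\text{index}),
\qquad
w(\text{program}) = 2^{-\ell(\text{program})}
  = 2^{-\ell(\text{laws})}\, 2^{-\ell(\text{index})}.
```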

If we pick only the simplest program making each future prediction (e.g. about the Presumptuous Philosopher’s science experiment), then the number of copies per universe doesn’t matter at all. Even if we are a little more general and consider a prior over the different generating prefix-free programs, the inclusion of this indexical parameter in the complexity of the program means that even if a universe has infinitely many copies of you, only a limited amount of probability gets assigned to them, and the harder they are to locate (say, by virtue of being extremely, ridiculously rare), the more bits they are penalized.
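
A toy numerical sketch of that last point (the bit counts and names here are invented for illustration; this is nowhere near a real Solomonoff inductor): as long as the indexical strings come from a prefix-free code, the per-copy weights can never add up past the weight of the bare laws, and each harder-to-locate copy contributes exponentially less.

```python
# Toy sketch: weight each copy of "you" by 2^-(bits for the laws + bits to
# locate that copy). If the index strings form a prefix-free code (so their
# lengths satisfy Kraft's inequality), the copies' total weight is bounded by
# the weight of the bare laws, however many copies the universe contains.

def copy_weight(laws_bits: int, index_bits: int) -> float:
    """Universal-prior-style weight of one 'laws + index' program."""
    return 2.0 ** -(laws_bits + index_bits)

laws_bits = 400                        # hypothetical complexity of the laws
index_bits = [10, 10, 50, 50, 1000]    # hypothetical costs of locating five copies

total = sum(copy_weight(laws_bits, b) for b in index_bits)
print(total <= 2.0 ** -laws_bits)      # True: the copies never outweigh the bare laws
# A copy needing 1000 index bits contributes ~2^-990 times less than a 10-bit one.
```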

All of this sounds like a really appealing resolution. Except… it’s contrary to our current usage of Occam’s razor. Under the current framework, the prior probability of a hypothesis about physics depends only on the complexity of the physical laws—no terms for initial conditions, and certainly no dependence on who exactly you are and how computationally easy it is to specify you in those laws. We even have LessWrong posts correcting people who think Occam’s razor penalizes physical theories with billions and billions of stars. But if we now have a penalty for the difficulty of locating ourselves within the universe, part of that penalty looks like the log of the number of stars! I’m not totally sure whether this is a bug or a feature yet.
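
For scale (a back-of-the-envelope number of my own, not from anything cited here): picking out one star among the roughly 10^22 stars in the observable universe costs only about log2 of that count.

```latex
% Rough "which star" contribution to the index, for ~10^22 stars:
\ell_{\text{which star}} \approx \log_2\!\bigl(10^{22}\bigr)
  = 22\,\log_2 10 \approx 73\ \text{bits}.
```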