Forcing Anthropics: Boltzmann Brains

Followup to: Anthropic Reasoning in UDT by Wei Dai

Suppose that I flip a logical coin—e.g. look at some binary digit of pi unknown to either of us—and, depending on the result, either create a billion of you in green rooms and one of you in a red room (if the coin came up 1), or one of you in a green room and a billion of you in red rooms (if the coin came up 0). You go to sleep at the start of the experiment, and wake up in a red room.

Do you reason that the coin very probably came up 0? Thinking, perhaps: “If the coin came up 1, there’d be a billion of me in green rooms and only one of me in a red room, and in that case, it’d be very surprising that I found myself in a red room.”

What is your degree of subjective credence—your posterior probability—that the logical coin came up 1?

There are only two answers I can see that might in principle be coherent, and they are “50%” and “a billion to one against”.
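
To see where those two numbers come from, here is a minimal sketch in Python. The copy counts are the ones given in the setup above; which of the two update rules is correct is precisely the question at issue, not something the snippet settles.

```python
# Two candidate posteriors for P(coin = 1 | I wake up in a red room).
prior_1 = prior_0 = 0.5      # logical coin: an unknown binary digit of pi
red_copies_if_1 = 1          # coin = 1: a billion of you in green rooms, one in a red room
red_copies_if_0 = 10**9      # coin = 0: one of you in a green room, a billion in red rooms

# Answer A: refuse to update on which copy you are; waking in a red room tells you nothing.
posterior_no_update = prior_1                      # 0.5

# Answer B: update as if you were a randomly sampled copy, weighting each coin
# outcome by how many copies wake up in a red room under it.
posterior_update = (prior_1 * red_copies_if_1) / (
    prior_1 * red_copies_if_1 + prior_0 * red_copies_if_0
)                                                  # ~1e-9, i.e. "a billion to one against"

print(posterior_no_update, posterior_update)
```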

Tomorrow I’ll talk about what sort of trouble you run into if you reply “a billion to one”.

But for today, suppose you reply “50%”. Thinking, perhaps: “I don’t understand this whole consciousness rigamarole, I wouldn’t try to program a computer to update on it, and I’m not going to update on it myself.”

In that case, why don’t you believe you’re a Boltzmann brain?

Back when the laws of thermodynamics were being worked out, the question was first asked: “Why did the universe seem to start from a condition of low entropy?” Boltzmann suggested that the larger universe was in a state of high entropy, but that, given a long enough time, regions of low entropy would spontaneously occur—wait long enough, and the egg will unscramble itself—and that our own universe was such a region.

The problem with this explanation is now known as the “Boltzmann brain” problem: namely, while Hubble-region-sized low-entropy fluctuations will occasionally occur, it would be far more likely—though still not likely in any absolute sense—for a handful of particles to come together in a configuration performing a computation that lasted just long enough to think a single conscious thought (whatever that means) before dissolving back into chaos. A random reverse-entropy fluctuation is exponentially more likely to take place in a small region than in a large one.
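
(A standard way to quantify that last sentence, not spelled out in the post: the probability of a thermal fluctuation into a macrostate whose entropy is lower by \(\Delta S\) goes roughly as

\[
P \;\sim\; e^{-\Delta S / k_B},
\]

and because the entropy deficit of a low-entropy region grows roughly in proportion to the region’s volume, a brain-sized fluctuation is exponentially more probable than a Hubble-volume-sized one.)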

So on Boltzmann’s attempt to explain the low-entropy initial condition of the universe as a random statistical fluctuation, it’s far more likely that we are a little blob of chaos temporarily hallucinating the rest of the universe, than that a multi-billion-light-year region spontaneously ordered itself. And most such little blobs of chaos will dissolve in the next moment.

“Well,” you say, “that may be an unpleasant prediction, but that’s no license to reject it.” But wait, it gets worse: The vast majority of Boltzmann brains have experiences much less ordered than what you’re seeing right now. Even if a blob of chaos coughs up a visual cortex (or equivalent), that visual cortex is unlikely to see a highly ordered visual field—the vast majority of possible visual fields more closely resemble “static on a television screen” than “words on a computer screen”. So on the Boltzmann hypothesis, highly ordered experiences like the ones we are having now constitute an exponentially infinitesimal fraction of all experiences.
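
One simplified way to see how lopsided that fraction is (a toy formalization of my own, not the post’s argument): model a momentary visual field as N bits, and call it “highly ordered” if it can be described by a program of at most K bits. A standard counting argument then bounds the ordered fraction by roughly 2^(K - N), as in the sketch below; the particular values of N and K are illustrative assumptions.

```python
# Toy counting bound (illustrative numbers, not from the post): there are fewer than
# 2**(K+1) descriptions of length <= K bits, so at most that many of the 2**N possible
# N-bit visual fields can be "highly ordered" in the description-length sense.
N = 1_000_000   # bits in one momentary visual field (assumed for illustration)
K = 10_000      # description-length budget that counts as "highly ordered" (assumed)

log2_fraction_bound = (K + 1) - N
print(f"ordered fraction <= 2**{log2_fraction_bound}")   # 2**-989999: exponentially tiny
```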

In contrast, suppose there is one more simple law of physics, not presently understood, which forces the initial condition of the universe to be low-entropy. Then the exponentially vast majority of brains occur as the result of ordered processes in ordered regions, and it’s not at all surprising that we find ourselves having ordered experiences.

But wait! This is just the same sort of logic (is it?) that one would use to say, “Well, if the logical coin came up 1, then it’s very surprising to find myself in a red room, since the vast majority of people-like-me are in green rooms; but if the logical coin came up 0, then most of me are in red rooms, and it’s not surprising that I’m in a red room.”

If you reject that reasoning, saying, “There’s only one me, and that person seeing a red room does exist, even if the logical coin came up 1,” then you should have no trouble saying, “There’s only one me, having a highly ordered experience, and that person exists even if all experiences are generated at random by a Boltzmann-brain process or something similar to it.” And furthermore, the Boltzmann-brain process is a much simpler process—it could occur with only the barest sort of causal structure, no need to postulate the full complexity of our own hallucinated universe. So if you’re not updating on the apparent conditional rarity of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences—albeit with extreme relative infrequency, but you don’t care about that.

Now, doesn’t the Boltzmann-brain hypothesis also predict that reality will dissolve into chaos in the next moment? Well, it predicts that the vast majority of blobs who experience this moment cease to exist afterward; and that among the few who don’t dissolve, the vast majority of those experience chaotic successors. But there would be an infinitesimal fraction of a fraction of successors who experience ordered successor-states as well. And you’re not alarmed by the rarity of those successors, just as you’re not alarmed by the rarity of waking up in a red room if the logical coin came up 1, right?

So even though your friend is standing right next to you, saying, “I predict the sky will not turn into green pumpkins and explode—oh, look, I was successful again!”, you are not disturbed by their unbroken string of successes. You just keep on saying, “Well, it was necessarily true that someone would have an ordered successor experience, on the Boltzmann-brain hypothesis, and that just happens to be us, but in the next instant I will sprout wings and fly away.”

Now this is not quite a logical contradiction. But the total rejection of all science, induction, and inference in favor of an unrelinquishable faith that the next moment will dissolve into pure chaos is sufficiently unpalatable that even I decline to bite that bullet.

And so I still can’t seem to dispense with anthropic reasoning—I can’t seem to dispense with trying to think about how many of me or how much of me there are, which in turn requires that I think about what sort of process constitutes a me. This is so even though I confess myself to be sorely confused about what could possibly make a certain computation “real” or “not real”, or how some universes and experiences could be quantitatively realer than others (possess more reality-fluid, as ’twere); and even though I still don’t know what exactly makes a causal process count as something I might have been, for purposes of being surprised to find myself as me, or, for that matter, what exactly a causal process is.

Indeed this is all greatly and terribly confusing unto me, and I would be less confused if I could go through life while only answering questions like “Given the Peano axioms, what is SS0 + SS0?”
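
For contrast, here is what that kind of question looks like when it does have a crisp answer: a minimal Lean 4 sketch (my own encoding of the Peano constructors, not anything from the post) in which SS0 + SS0 computes to SSSS0.

```lean
-- Peano-style naturals: zero and successor are the only constructors.
inductive PeanoNat where
  | zero : PeanoNat
  | succ : PeanoNat → PeanoNat

open PeanoNat

-- Addition by recursion on the second argument, as in the usual Peano definition.
def add : PeanoNat → PeanoNat → PeanoNat
  | m, zero   => m
  | m, succ n => succ (add m n)

-- SS0 + SS0 = SSSS0, by direct computation.
example : add (succ (succ zero)) (succ (succ zero))
        = succ (succ (succ (succ zero))) := rfl
```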

But then I have no defense against the one who says to me, “Why don’t you think you’re a Boltzmann brain? Why don’t you think you’re the result of an all-possible-experiences generator? Why don’t you think that gravity is a matter of branching worlds in which all objects accelerate in all directions and in some worlds all the observed objects happen to be accelerating downward? It explains all your observations, in the sense of logically necessitating them.”

I want to reply, “But then most people don’t have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising. Even if there are some versions of me that exist in regions or universes where they arose by chaotic chance, I anticipate, for purposes of predicting my future experiences, that most of my existence is encoded in regions and universes where I am the product of ordered processes.”
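
The shape of that reply, under heavy assumptions, is just a mixture prediction over hypotheses weighted by how much of “me” each accounts for. The numbers below are placeholders of my own, not anything the post supplies, and the “share of my measure” weights are exactly the poorly defined quantity the next paragraph complains about.

```python
# Toy mixture prediction (placeholder numbers, assumed for illustration only):
# weight each hypothesis about my origin by the share of "me" it accounts for,
# then predict the next experience by averaging over those hypotheses.
hypotheses = {
    # name: (assumed share of my measure, P(next experience is ordered | hypothesis))
    "ordered process in a low-entropy universe": (1 - 1e-15, 1 - 1e-12),
    "momentary Boltzmann-brain fluctuation":     (1e-15,     1e-30),
}

p_ordered_next = sum(share * p_ordered for share, p_ordered in hypotheses.values())
print(p_ordered_next)   # dominated by the ordered-process hypothesis
```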

And I currently know of no way to reply thus that does not make use of poorly defined concepts like “number of real processes” or “amount of real processes”, and “people”, and “me”, and “anticipate”, and “future experience”.

Of course confusion exists in the mind, not in reality, and it would not be the least bit surprising if a resolution of this problem were to dispense with such notions as “real” and “people” and “my future”. But I do not presently have that resolution.

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be “50%”.)