# Forcing Anthropics: Boltzmann Brains

Followup to: Anthropic Reasoning in UDT by Wei Dai

Suppose that I flip a logical coin—e.g. look at some binary digit of pi unknown to either of us—and depending on the result, either create a billion of you in green rooms and one of you in a red room if the coin came up 1; or, if the coin came up 0, create one of you in a green room and a billion of you in red rooms. You go to sleep at the start of the experiment, and wake up in a red room.

Do you reason that the coin very probably came up 0? Thinking, perhaps: “If the coin came up 1, there’d be a billion of me in green rooms and only one of me in a red room, and in that case, it’d be very surprising that I found myself in a red room.”

What is your degree of subjective credence—your posterior probability—that the logical coin came up 1?

There are only two answers I can see that might in principle be coherent, and they are “50%” and “a billion to one against”.

Tomorrow I’ll talk about what sort of trouble you run into if you reply “a billion to one”.

But for today, suppose you reply “50%”. Thinking, perhaps: “I don’t understand this whole consciousness rigamarole, I wouldn’t try to program a computer to update on it, and I’m not going to update on it myself.”

In that case, why don’t you believe you’re a Boltzmann brain?

Back when the laws of thermodynamics were being worked out, there was first asked the question: “Why did the universe seem to start from a condition of low entropy?” Boltzmann suggested that the larger universe was in a state of high entropy, but that, given a long enough time, regions of low entropy would spontaneously occur—wait long enough, and the egg will unscramble itself—and that our own universe was such a region.

The problem with this explanation is now known as the “Boltzmann brain” problem; namely, while Hubble-region-sized low-entropy fluctuations will occasionally occur, it would be far more likely—though still not likely in any absolute sense—for a handful of particles to come together in a configuration performing a computation that lasted just long enough to think a single conscious thought (whatever that means) before dissolving back into chaos. A random reverse-entropy fluctuation is exponentially vastly more likely to take place in a small region than a large one.

So on Boltzmann’s attempt to explain the low-entropy initial condition of the universe as a random statistical fluctuation, it’s far more likely that we are a little blob of chaos temporarily hallucinating the rest of the universe, than that a multi-billion-light-year region spontaneously ordered itself. And most such little blobs of chaos will dissolve in the next moment.

“Well,” you say, “that may be an unpleasant prediction, but that’s no license to reject it.” But wait, it gets worse: The vast majority of Boltzmann brains have experiences much less ordered than what you’re seeing right now. Even if a blob of chaos coughs up a visual cortex (or equivalent), that visual cortex is unlikely to see a highly ordered visual field—the vast majority of possible visual fields more closely resemble “static on a television screen” than “words on a computer screen”. So on the Boltzmann hypothesis, highly ordered experiences like the ones we are having now constitute an exponentially infinitesimal fraction of all experiences.

In contrast, suppose one more simple law of physics not presently understood, which forces the initial condition of the universe to be low-entropy. Then the exponentially vast majority of brains occur as the result of ordered processes in ordered regions, and it’s not at all surprising that we find ourselves having ordered experiences.

But wait! This is just the same sort of logic (is it?) that one would use to say, “Well, if the logical coin came up 1, then it’s very surprising to find myself in a red room, since the vast majority of people-like-me are in green rooms; but if the logical coin came up 0, then most of me are in red rooms, and it’s not surprising that I’m in a red room.”

If you reject that reasoning, saying, “There’s only one me, and that person seeing a red room does exist, even if the logical coin came up 1,” then you should have no trouble saying, “There’s only one me, having a highly ordered experience, and that person exists even if all experiences are generated at random by a Boltzmann-brain process or something similar to it.” And furthermore, the Boltzmann-brain process is a much simpler process—it could occur with only the barest sort of causal structure, no need to postulate the full complexity of our own hallucinated universe. So if you’re not updating on the apparent conditional rarity of having a highly ordered experience of gravity, then you should just believe the very simple hypothesis of a high-volume random experience generator, which would necessarily create your current experiences—albeit with extreme relative infrequency, but you don’t care about that.

Now, doesn’t the Boltzmann-brain hypothesis also predict that reality will dissolve into chaos in the next moment? Well, it predicts that the vast majority of blobs who experience this moment cease to exist afterward; and that among the few who don’t dissolve, the vast majority experience chaotic successors. But there would be an infinitesimal fraction of a fraction of successors who experience ordered successor-states as well. And you’re not alarmed by the rarity of those successors, just as you’re not alarmed by the rarity of waking up in a red room if the logical coin came up 1, right?

So even though your friend is standing right next to you, saying, “I predict the sky will not turn into green pumpkins and explode—oh, look, I was successful again!”, you are not disturbed by their unbroken string of successes. You just keep on saying, “Well, it was necessarily true that someone would have an ordered successor experience, on the Boltzmann-brain hypothesis, and that just happens to be us, but in the next instant I will sprout wings and fly away.”

Now this is not quite a logical contradiction. But the total rejection of all science, induction, and inference in favor of an unrelinquishable faith that the next moment will dissolve into pure chaos, is sufficiently unpalatable that even I decline to bite that bullet.

And so I still can’t seem to dispense with anthropic reasoning—I can’t seem to dispense with trying to think about how many of me or how much of me there are, which in turn requires that I think about what sort of process constitutes a me. Even though I confess myself to be sorely confused, about what could possibly make a certain computation “real” or “not real”, or how some universes and experiences could be quantitatively realer than others (possess more reality-fluid, as ’twere), and I still don’t know what exactly makes a causal process count as something I might have been for purposes of being surprised to find myself as me, or for that matter, what exactly is a causal process.

Indeed this is all greatly and terribly confusing unto me, and I would be less confused if I could go through life while only answering questions like “Given the Peano axioms, what is SS0 + SS0?”

But then I have no defense against the one who says to me, “Why don’t you think you’re a Boltzmann brain? Why don’t you think you’re the result of an all-possible-experiences generator? Why don’t you think that gravity is a matter of branching worlds in which all objects accelerate in all directions and in some worlds all the observed objects happen to be accelerating downward? It explains all your observations, in the sense of logically necessitating them.”

I want to reply, “But then most people don’t have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising. Even if there are some versions of me that exist in regions or universes where they arose by chaotic chance, I anticipate, for purposes of predicting my future experiences, that most of my existence is encoded in regions and universes where I am the product of ordered processes.”

And I currently know of no way to reply thusly, that does not make use of poorly defined concepts like “number of real processes” or “amount of real processes”; and “people”, and “me”, and “anticipate” and “future experience”.

Of course confusion exists in the mind, not in reality, and it would not be the least bit surprising if a resolution of this problem were to dispense with such notions as “real” and “people” and “my future”. But I do not presently have that resolution.

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be “50%”.)

• What is your degree of subjective credence—your posterior probability—that the logical coin came up 1?

. . .

(Tomorrow I will argue that anthropic updates must be illegal and that the correct answer to the original problem must be “50%”.)

If the question was, “What odds should you bet at?”, it could be answered using your values. Suppose each copy of you has \$1000, and copies of you in a red room are offered a bet that costs \$1000 and pays \$1001 if the Nth bit of pi is 0. Which do you prefer:

• To refuse the bet?

• With 50% subjective logical probability, the Nth bit of pi will be 0 and you will have \$1,000 per copy.

• With 50% subjective logical probability, the Nth bit of pi will be 1 and you will have \$1,000 per copy.

• To take the bet?

• With 50% subjective logical probability, the Nth bit of pi will be 0 and you will have \$1,000.999999999 per copy.

• With 50% subjective logical probability, the Nth bit of pi will be 1 and you will have \$999.999999 per copy.

But the question is “What is your posterior probability?” This is not a decision problem, so I don’t know that it has an answer.
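The per-copy averages in the two branches above can be checked directly. The following is a sketch of the arithmetic (my addition, not part of the original comment), assuming the payoffs exactly as stated:

```python
# One logical coin bit decides whether 10^9 copies wake in red rooms (bit 0)
# or just one copy does (bit 1). Red-room copies pay $1000 for a ticket that
# pays back $1001 if the bit is 0 (net +$1), and nothing otherwise.

N_BIG, N_SMALL, START = 10**9, 1, 1000.0

def average_wealth(bit, take_bet):
    """Average dollars per copy after the bet resolves."""
    reds = N_BIG if bit == 0 else N_SMALL    # copies in red rooms
    greens = N_SMALL if bit == 0 else N_BIG  # copies in green rooms
    if not take_bet:
        red_wealth = START
    else:
        # Win $1 net if the bit is 0, lose the whole $1000 stake if it is 1.
        red_wealth = START + 1 if bit == 0 else 0.0
    total = reds * red_wealth + greens * START
    return total / (reds + greens)

print(average_wealth(0, True))   # about 1000.999999999 per copy
print(average_wealth(1, True))   # about 999.999999 per copy
print(average_wealth(0, False))  # 1000.0 either way if you refuse
```

Refusing the bet leaves every copy at \$1,000 regardless of the bit, which is why the averages differ only when the bet is taken.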

I think it may be natural to ask instead: “Given that your learned cognitive system of rational prediction is competing for influence over anticipations used in making decisions, in a brain which awards influence over anticipation to different cognitive systems depending on the success of their past reported predictions, which probability should your rational prediction system report to the brain’s anticipation-influence-awarding mechanisms?”

Suppose you know the following:

• Your brain will use a simple Bayesian mechanism which will treat cognitive systems as hypotheses and award influence using Bayesian updating.

• In the future, the competitor cognitive systems to your rational prediction system will make predictions which will cause you to take harmful actions. The less influential the competitor systems will be, the less harmful the actions will be.

• The competitor cognitive systems will predict 1:1 probabilities of the experiences of being informed that the Nth digit of pi is 0 or 1.

This question could be answered using your values. Which would you prefer:

• In both green rooms and red rooms, to rationally predict 1:1 probabilities of the experiences of being informed that the Nth bit of pi is 0 or 1?

• With 50% subjective logical probability, the Nth bit of pi will be 0. There will be 1,000,000,001 copies of you whose learned cognitive systems for rational prediction took a likelihood hit of 1/2. The competitor cognitive systems will also have taken a likelihood hit of 1/2. The relative influences of the cognitive systems will not change.

• With 50% subjective logical probability, the Nth bit of pi will be 1. There will be 1,000,000,001 copies of you whose learned cognitive systems for rational prediction took a likelihood hit of 1/2. The competitor cognitive systems will also have taken a likelihood hit of 1/2. The relative influences of the cognitive systems will not change.

• In red rooms, to rationally predict a 1,000,000,000:1 probability of the experience of being informed that the Nth bit of pi is 0, and in green rooms, to rationally predict a 1,000,000,000:1 probability of the experience of being informed that the Nth bit of pi is 1?

• With 50% subjective logical probability, the Nth bit of pi will be 0. There will be 1,000,000,000 copies of you who woke up in red rooms, whose learned cognitive systems for rational prediction took a tiny 1,000,000,000/1,000,000,001 likelihood hit. The competitor cognitive systems will have taken a likelihood hit of 1/2. In those 1,000,000,000 copies, the relative influences of the cognitive systems will be adjusted by the ratio 2,000,000,000:1,000,000,001. There will also be one copy of you who woke up in a green room, whose learned cognitive systems for rational prediction took a likelihood hit of 1/1,000,000,001. In that copy, the relative influences of the cognitive systems will be adjusted by the ratio 2:1,000,000,001.

• With 50% subjective logical probability, the Nth bit of pi will be 1. There will be one copy of you who woke up in a red room, whose learned cognitive systems for rational prediction took a likelihood hit of 1/1,000,000,001. The competitor cognitive systems will have taken a likelihood hit of 1/2. In that copy, the relative influences of the cognitive systems will be adjusted by the ratio 2:1,000,000,001. There will also be 1,000,000,000 copies of you who woke up in a green room, whose learned cognitive systems for rational prediction took a tiny 1,000,000,000/1,000,000,001 likelihood hit. In those 1,000,000,000 copies, the relative influences of the cognitive systems will be adjusted by the ratio 2,000,000,000:1,000,000,001.

The answer depends on the starting relative influences and on the details of the function from amounts of non-rational anticipation to amounts of harm. But for perspective, the ratio 2:1,000,000,001 can be reversed with about 28.9 applications of the ratio 2,000,000,000:1,000,000,001.
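As a sketch of where these ratios come from (my own reconstruction, not the commenter's), the likelihood ratios against the always-1:1 competitor can be computed directly:

```python
from math import log

N = 10**9                        # copies in the majority-color rooms
p_majority = N / (N + 1)         # room-dependent prediction, majority rooms
p_minority = 1 / (N + 1)         # room-dependent prediction, minority room
p_competitor = 0.5               # competitor systems always predict 1:1

# Each influence shift is the likelihood ratio against the competitor's 1/2.
majority_shift = p_majority / p_competitor  # = 2e9 / (1e9 + 1), just under 2
minority_shift = p_minority / p_competitor  # = 2 / (1e9 + 1), a huge hit

# How many majority-room updates does it take to undo one minority-room hit?
undo = -log(minority_shift) / log(majority_shift)
print(majority_shift, minority_shift, undo)
```

Each majority-room update multiplies the rational system's relative influence by just under 2, while the single minority-room observation divides it by roughly 500,000,000, so it takes roughly 29 majority-room updates to undo one minority-room hit.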

If your copies are being merged, the optimal “rational” prediction would depend on the details of the merging algorithm. If the merging algorithm took the arithmetic mean of the updated influences, the optimal prediction would still depend on the starting relative influences and the harm from non-rational anticipations. But if the merging algorithm multiplicatively combined the likelihood ratios from every copy’s predictions, then the second prediction rule would be optimal.

To make decisions about how to value possibly logically impossible worlds, it may help to imagine that the decision problem will be iterated with the (N+1)th bit of pi, the (N+2)th bit, …

(If the rational prediction system already has complete control of your brain’s anticipations, then there may be no reason to predict anything that does not affect a decision.)

• I agree with Steve; we have to take a step back and ask not for probabilities but for decision algorithms that aim to achieve certain goals, and then it all makes sense; it has to—based upon materialism, whatever definition of “you” you try to settle upon, “you” is some set of physical objects that behave according to a certain decision algorithm, and given the decision algorithm, “you” will have a well-defined expected future reward.

• 9 Sep 2009 6:20 UTC
−2 points

Let me suggest that for anthropic reasoning, you are not directly calculating expected utility but actually trying to determine priors instead. And this traces back to Occam’s razor and hence complexity measures (a complexity prior). Further, it is not probabilities that you are trying to directly manipulate, but degrees of similarity (i.e., which reference class does a given observer fall into?—what is the degree of similarity between given algorithms?). So rather than utility and probability, you are actually trying to manipulate something more basic, i.e., complexity and similarity measures.

Suggested analogy:

Complexity (is like) Utility
Similarity (is like) Probability

Let me suggest that rather than trying to ‘maximize utility’ directly, you should first attempt to ‘minimize complexity’ using a new generalized form of rationality based on the above analogy (the putative method would be an entirely new type of rationality which subsumes ordinary Bayesian reasoning as a special case). The ‘expected complexity’ (analogous to ‘expected utility’) would be based on a ‘complexity function’ (analogous to a ‘utility function’) that combines similarity measures (similarities between algorithms) with the complexities of given outcomes. The utilities and probabilities would be derived from these calculations (ordinary Bayesian rationality would be derivative rather than fundamental).

• M J Geddes (Black Swan Siren!)

• Necromancy, but: easy. Boltzmann brains obey little or no causality, and thus cannot possibly benefit from rationality. As such, rationality is wasted on them. Optimize for the signal, not for the noise.

• I would have answered 1B:1 (looking forward to the second post to be proved wrong); however, I think a rational agent should never believe in the Boltzmann brain scenario regardless.

Not because it is not a reasonable hypothesis, but because it negates the agent’s capabilities of estimating prior probabilities (it cannot trust even a predetermined portion of its memories), plus it also makes optimizing outcomes a futile undertaking.

Therefore, I’d generally say that an agent has to assume an objective, causal reality as a precondition of using decision theory at all.

• The skeleton of the argument is:

1. Present a particular thought experiment, intended to provoke anthropic reasoning. There are two moderately plausible answers, “50%” and “a billion to one against”.

2. Assume, for the sake of argument, that the answer to the thought experiment is 50%. Note that the “50%” answer corresponds to ignoring the color of the room—“not updating on it” in the Bayesian jargon.

3. The thought experiment is analogous to the Boltzmann-brain hypothesis. In particular, the color of the room corresponds to the ordered-ness of our experiences.

4. With the exception of the ordered-ness of our experiences, a stochastic-all-experience-generator would be consistent with all observations.

5. Occam’s Razor: Use the simplest possible hypothesis consistent with observations.

6. A stochastic-all-experience-generator would be a simple hypothesis.

7. From 3, 4, 5, and 6, predict that the universe is a stochastic all-experience generator.

8. From 7, some very unpleasant consequences.

9. From 8, reject the assumption.

I think the argument can be improved.

According to the minimum description length notion of science, we have a model and a sequence of observations. A “better” model is one that is short and compresses the observations well. The stochastic-all-experience-generator is a short model, but it doesn’t compress our observations. I think this is basically saying that according to the MDL version of Occam’s Razor, 6 is false.
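The MDL point here is easy to illustrate with a general-purpose compressor standing in, crudely, for description length. This sketch is my addition, using zlib as the stand-in:

```python
import random
import zlib

# An ordered observation stream compresses well; a typical stream from an
# "all-experience generator" (random bytes) does not, so that model leaves
# essentially the full length of the observations on the bill.

random.seed(0)
ordered = bytes(range(256)) * 100  # 25,600 bytes of highly regular "experiences"
chaotic = bytes(random.randrange(256) for _ in range(25600))  # typical random stream

print(len(zlib.compress(ordered)))  # far smaller than 25,600
print(len(zlib.compress(chaotic)))  # roughly the original size
```

The compressor is only a loose proxy for Kolmogorov complexity, but the asymmetry it exhibits is the one the comment relies on: the model-plus-compressed-data total is short for the ordered stream and not for the chaotic one.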

The article claims that the stochastic-all-experience-generator is a “simple” model of the world and would defeat more common-sense models of the world in an Occam’s-Razor-off in the absence of some sort of anthropic defense. That claim (6) might be true, but it needs more support.

• Isn’t the argument in step 1 false? If one applies Bayes’ theorem, with an initial probability of 50% and a new likelihood ratio of a billion to one, don’t you get 500,000,000-to-one chances?

• I think you may be sincerely confused. Would you please reword your question?

If your question is whether someone (either me or the OP) has committed a multiplication error—yes, it’s entirely possible, but multiplication is not the point. The point is anthropic reasoning and whether “I am a Boltzmann brain” is a simple hypothesis.

• I agree very much.

It reminds me of a remark of Eliezer’s in his diavlog with Scott about the many-worlds interpretation of QM. There he also said something to the effect that Occam’s razor is only about the theory, not about the “amount of stuff”.

I think that was the same fallacy. When using MDL, you have to give a short description of your actual observation history, or at least give an upper bound on the compressed length. In many-worlds theories these bounds can become very nontrivial, and the observations can easily dominate the description length; therefore Occam’s razor cannot be applied without thorough quantitative analysis.

Of course, in that special context it was true that a random state-reduction is not better than a many-worlds hypothesis—in fact, slightly worse. However, one should add, a deterministic (low-complexity) state reduction would be far superior.

Regardless: such lighthearted remarks about the “amount of stuff” in Occam’s razor are misleading at least.

• “That claim (6) might be true, but it needs more support.” Agreed.

• It seems to me that “I’m a Boltzmann brain” is exactly the same sort of useless hypothesis as “Everything I think I experience is a hallucination manufactured by an omnipotent evil genie”. They’re both non-falsifiable by definition, unsupported by any evidence, and have no effect on one’s decisions in any event. So I say: show me some evidence, and I’ll worry about it. Otherwise it isn’t even worth thinking about.

• [Rosencrantz has been flipping coins, and all of them are coming down heads]

Guildenstern: Consider: One, probability is a factor which operates within natural forces. Two, probability is not operating as a factor. Three, we are now held within un-, sub- or super-natural forces. Discuss.

Rosencrantz: What?

Rosencrantz & Guildenstern Are Dead, Tom Stoppard

• The Boltzmann brain argument was the reason why I had not adopted something along the lines of UDT, despite having considered it and discussed it a bit with others, before the recent LW discussion. Instead, I had tagged it as ‘needs more analysis later.’ After the fact, that looks like flinching to me.

• 8 Sep 2009 15:47 UTC
3 points

Suppose Omega plays the following game (the “Probability Game”) with me: You will tell me a number X representing the probability of A. If A turns out to be true, I will increase your utility by ln(X); otherwise, I will increase your utility by ln(1−X). It’s well known that the way one maximizes one’s expected utility is by reporting one’s actual probability of A.

Presumably, decision mechanisms should be consistent under reflection. Even if not, if I somehow know that Omega’s going to split me into 1,000,000,001 copies and do this, I want to modify my decision mechanism to do what I think is best.

Suppose I care about the entire group of 1,000,000,000 me’s who go into one color of room precisely as much as I care about the single me who goes into the other color. (Perhaps I’m extending the idea that two copies of one person should not be more deserving than a single copy of the person.) In order to maximize the average utility here, I should have everyone declare a 50% probability of the best answer, resulting in an average utility of about −0.69. If I had everyone declare a 1,000,000,000-in-1,000,000,001 probability, the average utility would be about −10.

Suppose, on the other hand, that I care about each individual person equally. If I had everyone declare a 50% probability, the average utility would still be −0.69, but if I had everyone declare a 1,000,000,000-in-1,000,000,001 probability, the average utility would go all the way up to −0.000000022.
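The averages quoted above can be reproduced directly. This is a sketch of my own, with the two caring schemes expressed as weights over the majority group and the lone minority copy:

```python
from math import log

N = 10**9  # copies in the majority-color rooms; one copy in the other color

def avg_utility(p_majority, weights):
    """Weighted average ln-score across copies.

    p_majority: the probability each copy assigns to the hypothesis that
    matches the majority rooms. weights = (majority, minority) caring weights.
    """
    w_maj, w_min = weights
    u_maj = log(p_majority)      # score of the 10^9 copies who guessed right
    u_min = log(1 - p_majority)  # score of the one copy who guessed wrong
    return (w_maj * u_maj + w_min * u_min) / (w_maj + w_min)

# Caring about the whole majority group as much as the single minority copy:
print(avg_utility(0.5, (1, 1)))          # about -0.69
print(avg_utility(N / (N + 1), (1, 1)))  # about -10
# Caring about each individual copy equally:
print(avg_utility(0.5, (N, 1)))          # still about -0.69
print(avg_utility(N / (N + 1), (N, 1)))  # about -0.000000022
```

With group weighting, the lone copy's huge penalty of ln(1/(N+1)) gets half the weight, dragging the average to about −10; with per-copy weighting it is diluted by a billion nearly-perfect scores.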

One’s answer to the Probability Game is one’s probability estimate. The consistent-under-reflection answer to the Probability Game depends on one’s values. Therefore, one’s probability estimate depends on one’s values. It’s counterintuitive, but I don’t think I can argue against it.

Now here, perhaps, is a refutation. Suppose I know that some time in the future, I’m going to be turned into my evil twin, Dr. Dingo, and Omega is going to play the Probability Game with me on the statement “The sky is blue”. I hate my evil twin so much that I consider my utility to have his utility subtracted from it. Therefore, I modify myself to say that the probability that the sky is blue is 0, thereby resulting in a utility for him of negative infinity, and a utility for me of infinity. Through the same mechanism—using an interpretation function to determine my utility given the utilities of future copies of me—I apparently make the probability that the sky is blue be 0. This doesn’t seem right.

Perhaps we could require that interpretation functions be monotonically related to the utilities they’re interpreting, so that an increase in a future me’s utility can’t decrease my current me’s utility. I don’t know if that would work.

• But for today, suppose you reply “50%”. Thinking, perhaps: “I don’t understand this whole consciousness rigamarole, I wouldn’t try to program a computer to update on it, and I’m not going to update on it myself.”

In that case, why don’t you believe you’re a Boltzmann brain?

This sounds backwards (sideways?); the reason to (strongly) believe one is a Boltzmann brain is that there are very many of them in some weighting compared to the “normal” you, which corresponds to accepting the billion-to-one update in this thought experiment. If you don’t update, then the other billion people are (epistemically) irrelevant, and in exactly the same way so are Boltzmann brains. It doesn’t at all matter how many visual cortexes spontaneously form in the Chaos.

In other words, there are two parts to not updating: you can’t place a greater weight on particular states of the world, arguing that this particular kind of situation is privileged, but at the same time you can’t be disturbed by an argument that there is huge weight on that other class of crazy situations which leave your privileged situation far behind. You can’t refute the assertion that you are a Boltzmann brain, but you are undisturbed by the assertion that there are Boltzmann brains.

Of course, in all cases some situations may be preferentially privileged. You don’t care about what happens to a Boltzmann brain, or more likely just can’t do much for it anyway. In the rooms with a billion copies, you may care about whether only one person makes a mistake, or a whole billion of them (total utilitarianism). But that’s utility of the situation, not probability, and the construction of the thought experiment clearly doesn’t try to make utility symmetrical, hence the skewed intuition.

The confusion between probability and utility seems to explain the intuition: weighting is there, just not in the probability, and in fact it can’t be represented as probability (in which case the weighting is not so much in utility, since there is no anthropic utility just as there is no anthropic probability, but in how the global preference responds to actions performed in particular situations).

• The problem is that if you don’t update on the proportions of sentients who have your particular experience, then there are much simpler hypotheses than our current physical model which would generate and “explain” your experiences, namely, “Every experience happens within the dust.”

To put it another way, the dust hypothesis is extremely simple and explains why this experience exists. It just doesn’t explain why an ordered experience instead of a disordered one, when ordered experiences are such a tiny fraction of all experiences. If you think the latter is a non-consideration then you should just go with the simplest explanation.

• Traditional explanations are for updating; this is probably a relevant tension. If you don’t update, you can’t explain in the sense of updating. The notion of explanation itself has to be revised in this light.

• Are the Boltzmann brain hypothesis and the dust hypothesis really simpler than the standard model of the universe, in the sense of Occam’s razor? It seems to me that they aren’t.

I’m thinking specifically about Solomonoff induction here. A Boltzmann brain hypothesis would be a program that correctly predicts all my experiences up to now, and then starts predicting unrelated experiences. Such a program of minimal length would essentially emulate the standard model until output N, and then start doing something else. So it would be longer than the standard model by however many bits it takes to encode the number N.
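A toy rendering of this point (my own illustration, not Solomonoff induction proper): charge the Boltzmann-style program for the bits needed to encode its divergence step N, under a 2^-length prior:

```python
# The "diverge at step N" program embeds N, so it is longer than the
# standard model by roughly log2(N) bits; a 2^-length prior then favors
# the standard model by a factor of about 2^log2(N), i.e. about N itself.

def extra_bits(n):
    """Bits needed to encode the divergence step N."""
    return n.bit_length()

def prior_odds_standard_vs_boltzmann(n):
    """Prior odds under a 2^-length prior, ignoring constant overhead."""
    return 2 ** extra_bits(n)

print(extra_bits(10**9))                        # 30 bits
print(prior_odds_standard_vs_boltzmann(10**9))  # 2**30, about 1.07e9 : 1
```

So the later the predicted divergence, the longer the program and the smaller its prior weight, which is why "everything dissolves at the next step" hypotheses never dominate a well-tested standard model under this scheme.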

• (Missing-word alert in paragraph 11: “Even [if] a blob of chaos coughs up a visual cortex (or equivalent)...”.)

• thx fixed

• In the crit­i­cisim of Boltz­man, en­tropy sounds like a ra­dio dial that some­one is tweak­ing rather than a prop­erty of some space. I may be mi­s­un­der­stand­ing some­thing.

Ba­si­cally, if some tiny part of some enor­mous uni­verse hap­pened to con­dense into a very low-en­tropy state, that does not mean that it could spon­ta­neously jump to a high-en­tropy state. It would, with ex­tremely high prob­a­bil­ity, slowly re­turn to a high-en­tropy state. It thus seems like we could see what we ac­tu­ally see and not be at risk of spon­ta­neously turn­ing into static. Our cur­rent ob­serv­able uni­verse has a cer­tain amount of en­tropy and had a cer­tain amount be­fore the cur­rent time. If we were in some differ­ent bub­ble, the uni­verse would pre­sum­ably look quite differ­ent, and prob­a­bly only cer­tain bub­bles could gen­er­ate con­scious ob­servers, and those bub­bles would not be at risk of spon­ta­neously max­i­miz­ing en­tropy.

The ar­gu­ment as ap­plied to con­scious­ness makes perfect sense, but at the very least I seem to be miss­ing some­thing about the uni­verse ar­gu­ment.

• It thus seems like we could see what we ac­tu­ally see and not be at risk of spon­ta­neously turn­ing into static. Our cur­rent ob­serv­able uni­verse has a cer­tain amount of en­tropy and had a cer­tain amount be­fore the cur­rent time.

If the low-en­tropy area of the uni­verse was origi­nally a spon­ta­neous fluc­tu­a­tion in a big­ger max-en­tropy uni­verse, than that is vastly im­prob­a­ble.

Such a fluc­tu­a­tion is ex­po­nen­tially more likely for (lin­early) smaller vol­umes of the uni­verse. So the par­si­mo­nious ex­pla­na­tion for what we see, on this the­ory, is that the part of the uni­verse that has low en­tropy is the small­est which is still enough to gen­er­ate our ac­tual ex­pe­rience.

How small is “small­est”? Well, to be­gin with, it’s not large enough to in­clude stars out­side the So­lar Sys­tem; it’s vastly more likely that the light en route from those stars to Earth was spon­ta­neously cre­ated, than that the stars them­selves and all the empty space be­tween (very low en­tropy!) were cre­ated mil­lions of years ear­lier. So the par­si­mo­nious ex­pla­na­tion is that any mo­ment now, that light cre­ated en route is go­ing to run out and we’ll start see­ing static (or at least dark­ness) in the night sky.

Similarly: we have a long historical record in geology, archaeology, even written history. Did it all really happen? The parsimonious explanation says that it's vastly more likely that an Earth with fossils was spontaneously created than that an Earth with dinosaurs was created, who then became fossils. This is because the past light cone of, say, a billion-year-old Earth is much bigger than the past light cone of a 6000-year-old Earth, and so requires the spontaneous creation of a vastly bigger section of universe.

Fi­nally, it’s vastly more likely that you were spon­ta­neously cre­ated a sec­ond ago com­plete with all your mem­o­ries, than that you re­ally lived through what you re­mem­ber. And it’s vastly more likely that the whole spon­ta­neous cre­ation was only a few light-sec­onds across, and not as big as it seems. In which case it’ll stop ex­ist­ing any mo­ment now.

That’s the ex­pe­rience of a Boltz­mann Brain.

• I agree. The idea that low-en­tropy pock­ets that form are to­tally im­mune to a sim­plic­ity prior seems un­jus­tified to me. The uni­verse may be in a high-en­tropy state, but it’s still got phys­i­cal laws to fol­low! It’s not just do­ing things to­tally at ran­dom; that’s merely a con­ve­nient ap­prox­i­ma­tion. Maybe I am ig­no­rant here, but it seems like the prob­a­bil­ity of a par­tic­u­lar low-en­tropy bub­ble will be based on more than just its size.

• (To­mor­row I will ar­gue that an­thropic up­dates must be ille­gal and that the cor­rect an­swer to the origi­nal prob­lem must be “50%”.)

Is your intent here to argue both sides of the issue to help, well, lay out the issues, or is it your actual current position that anthropic updates really really are verboten and that 50% is the really really correct answer?

• It’s my in­tent here to lay out my own con­fu­sion.

• It's not entirely clear what it means to create a number of "me": my consciousness is only one and cannot be more than one, and I can only feel sensations from one single body. If the idea is just to generate a certain number of physical copies of my body and embed my present consciousness into one of them at random, then the problem is at least clear and determined from a mathematical point of view: it seems to be a simple problem about conditional probability. You are asking what is the probability that an event happened in the past, given some a priori possible consequence as the condition; it can be easily solved by Bayes' formula, and the probability is about one in a billion.

• 9 Nov 2009 22:52 UTC
1 point

Here, let me re-re­spond to this post.

So if you’re not up­dat­ing on the ap­par­ent con­di­tional rar­ity of hav­ing a highly or­dered ex­pe­rience of grav­ity, then you should just be­lieve the very sim­ple hy­poth­e­sis of a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor, which would nec­es­sar­ily cre­ate your cur­rent ex­pe­riences—albeit with ex­treme rel­a­tive in­fre­quency, but you don’t care about that.

“A high-vol­ume ran­dom ex­pe­rience gen­er­a­tor” is not a hy­poth­e­sis. It’s a thing. “The uni­verse is a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor” is bet­ter, but still not okay for Bayesian up­dat­ing, be­cause we don’t ob­serve “the uni­verse”. “My ob­ser­va­tions are out­put by a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor” is bet­ter still, but it doesn’t spec­ify which out­put our ob­ser­va­tions are. “My ob­ser­va­tions are the out­put at [...] by a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor” is a spe­cific, up­dat­able hy­poth­e­sis—and its en­tropy is so high that it’s not worth con­sid­er­ing.

Did I just use an­thropic rea­son­ing?

Let's apply this to the hotel problem. There are two specific hypotheses: "My observations are what they were before, except I'm now in green room #314159265" (or whatever green room) and "... except I'm now in the red room". It appears that the thing determining probability is not multiplicity but the complexity of the "address" - and, counterintuitively, this makes the type of room only one of you is in more likely than the type of room a billion of you are in.

Yes, I’m tak­ing into ac­count that “I’m in a green room” is the dis­junc­tion of one billion hy­pothe­ses and there­fore has one billion times the prob­a­bil­ity of any of them. In or­der for one’s pri­ors to be well-defined, then for in­finitely many N, all hy­pothe­ses of length N+1 to­gether must be less likely than all hy­pothe­ses of length N to­gether.

Edit: changed “more likely” to “less likely” (oops) and “large N” to “in­finitely many N”, as per peng­vado. Thanks!

This post in sev­en­teen words: it’s the high mul­ti­plic­ity of brains in the Boltz­mann brain hy­poth­e­sis, not their low fre­quency, that mat­ters.

Let the pok­ing of holes into this post be­gin!

• “My ob­ser­va­tions are the out­put at [...] by a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor”

“My ob­ser­va­tions are [...], which were out­put by a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor”. Since the task is to ex­plain my ob­ser­va­tions, not to pre­dict where I am. This way also makes it more clear that that suffix is strictly su­perflu­ous from a Kol­mogorov per­spec­tive.

In or­der for one’s pri­ors to be well-defined, then for large N, all hy­pothe­ses of length N+1 to­gether must be more likely than all hy­pothe­ses of length N to­gether.

You mean less likely. i.e. there is no non­nega­tive mono­tonic-in­creas­ing in­finite se­ries whose sum is finite. Also, it need not hap­pen for all large N, just some of them. So I would clar­ify it as: ∀L ∃N>L ∀M>N (((sum of prob­a­bil­ities of hy­pothe­ses of length M) < (sum of prob­a­bil­ities of hy­pothe­ses of length N)) or (both are zero)).

But you shouldn’t take that into ac­count for your ex­am­ple. The the­o­rem ap­plies to in­finite se­quences of hy­pothe­ses, but not to any one finite hy­poth­e­sis such as the dis­junc­tion of a billion green rooms. To get con­clu­sions about a par­tic­u­lar hy­poth­e­sis, you need more than “any prior is Oc­cam’s ra­zor with re­spect to a suffi­ciently per­verse com­plex­ity met­ric”.

• “My ob­ser­va­tions are [...], which were out­put by a high-vol­ume ran­dom ex­pe­rience gen­er­a­tor”. Since the task is to ex­plain my ob­ser­va­tions, not to pre­dict where I am. This way also makes it more clear that that suffix is strictly su­perflu­ous from a Kol­mogorov per­spec­tive.

You are cor­rect, though I be­lieve your state­ment is equiv­a­lent to mine.

You mean less likely. i.e. there is no non­nega­tive mono­tonic-in­creas­ing in­finite se­ries whose sum is finite. Also, it need not hap­pen for all large N, just some of them. So I would clar­ify it as: ∀L ∃N>L ∀M>N (((sum of prob­a­bil­ities of hy­pothe­ses of length M) < (sum of prob­a­bil­ities of hy­pothe­ses of length N)) or (both are zero)).

Right again; I’ll fix my post.

• I think we need to re­duce “sur­prise” and “ex­pla­na­tion” first. I sug­gest they have to do with bounded ra­tio­nal­ity and log­i­cal un­cer­tainty. Th­ese con­cepts don’t seem to ex­ist in de­ci­sion the­o­ries with log­i­cal om­ni­science.

Sur­prise seems to be the out­put of some heuris­tic that tell you when you may have made a cog­ni­tive er­ror or taken a com­pu­ta­tional short­cut that turns out to be wrong (i.e., you find your­self in a situ­a­tion where you had pre­vi­ously com­puted to have low prob­a­bil­ity) and should go back and recheck your logic. After you’ve found such an er­ror and have fixed it, per­haps you call the fix an ex­pla­na­tion (i.e., it “ex­plains” why the low com­puted prob­a­bil­ity was an er­ror).

In UDT, there ought to be equiv­a­lents of sur­prise and ex­pla­na­tion, al­though I’m too tired to think of them right now. I’ll try again later.

• In that case, why don’t you be­lieve you’re a Boltz­mann brain?

I think a por­tion of the con­fu­sion comes from im­plicit as­sump­tions about what con­sti­tutes “you”, and an im­plicit se­man­tics for how to ma­nipu­late the con­cept. Sup­pose that there are N (N large) in­stances of “you” pro­cesses that run on Boltz­mann Brains, and M (M << N) that run in sen­si­ble copies of the world around me. Which one of them is “you”? If “you” is a par­tic­u­lar one of the N that run on Boltz­mann Brains, then which one is “you, 10 sec­onds from now”?

It seems like it ought to be pos­si­ble to ex­pe­rience a short stream of ran­dom sen­sa­tions; thus in a “Boltz­mann Brains Dom­i­nate” mul­ti­verse, I ought to ex­pect, a pri­ori, that my ex­pe­riences will be ran­dom­ness, if I con­sider my­self to be ran­domly sam­pled ac­cord­ing to the uniform dis­tri­bu­tion on can­di­date me’s.

Up­dat­ing on the fact that my ex­pe­riences this in­stant are not ran­dom noise, if the “Boltz­mann Brains Dom­i­nate” mul­ti­verse is the only hy­poth­e­sis, I ought to still be­lieve that I am a Boltz­mann Brain with very high prob­a­bil­ity.

But the only copies of "me" that will have a "future" that interacts meaningfully with the decisions I make now are those copies of me that live in the sensible universe, or at least a vaguely sensible universe, where "vaguely sensible" means "acts according to the usual rules of causality for at least long enough for me to get experience back that depends non-trivially upon what decision I took".

So my solution would be to admit that (a) we are not sure exactly what we mean when we use words like "me" in a universe/multiverse with lots of copies of the physical correlates of "me", and (b) that our values dictate that even if we conclude with high probability that we are Boltzmann Brains, we ought to condition on the negation of that, because actions outputted to a random environment are pointless.

• ISTM the prob­lem of Boltz­mann brains is ir­rele­vant to the 50%-ers. Pre­sum­ably, the 50%-ers are ra­tio­nal—e.g., will­ing to up­date on statis­ti­cal stud­ies sig­nifi­cant at p=0.05. So they don’t ob­ject to the statis­tics of the situ­a­tion; they’re ob­ject­ing to the con­cept of “cre­at­ing a billion of you”, such that you don’t know which one you are. If you had offered to roll a billion-sided die to de­ter­mine their fate (check your lo­cal table­top-gam­ing store), there would be no dis­agree­ment.

Of course, this prob­lem of iden­tity and con­ti­nu­ity has been hashed out on OB/​LW be­fore. But the Boltz­mann-brain hy­poth­e­sis doesn’t re­quire more than one of you—just a lot of other peo­ple, some­thing the 50%-ers have no philo­soph­i­cal prob­lem with. It’s a challenge for a solip­sist, not a 50%-er.

• “Why did the uni­verse seem to start from a con­di­tion of low en­tropy?”

I'm confused here. If we don't go with a big universe and instead just say that our observable universe is the whole thing, then tracing back time we find that it began with a very small volume. While it's true that such a system would necessarily have low entropy, that's largely because small volume = not many different places to put things.

Alter­na­tive hy­poth­e­sis: The uni­verse be­gan in a state of max­i­mal en­tropy. This max­i­mum value was “low” com­pared to pre­sent day be­cause the early uni­verse was small. As the uni­verse ex­pands, its max­i­mum en­tropy grows. Its re­al­ized en­tropy also grows, just not as fast as its max­i­mal en­tropy.

• BBs can't make correct judgements about their reality. Their judgements are random. So 50 percent of BBs think that they are in a non-random reality even if they are in a random one. So your experience doesn't provide any information about whether you are a BB or not. Only the prior matters, and the prior is high.

• Their judgements are random. So 50 percent of BBs think that they are in a non-random reality even if they are in a random one.

The quoted figure does not fol­low. Ran­dom, yes; but it’s not a coin­flip. Given that a Boltz­mann Brain can ran­domly ap­pear with any set of mem­o­ries, and given that the po­ten­tial set of ran­dom uni­verses is vastly larger than the po­ten­tial set of non-ran­dom uni­verses, I’d imag­ine that the odds of a ran­domly-se­lected Boltz­mann Brain think­ing it is in a non-ran­dom uni­verse are pretty low...

• It would be true if BBs had time to think about their experiences and the ability to come to logical conclusions. But BBs' opinions are also random.

• Hmmm. If the Boltz­mann Brain has no time to think and up­date its own opinions from its own mem­ory, then it is over­whelm­ingly likely that it has no opinion one way or an­other about whether or not it is in a ran­dom uni­verse. In fact, it is over­whelm­ingly likely that it does not even un­der­stand the ques­tion, be­cause its mindspace does not in­clude the con­cepts of both “ran­dom” and “uni­verse”...

• Of course most BBs don't think about whether they are random or not. But among the subset of BBs who have thoughts about it (we can't say they are thinking, as that is a longer process), those thoughts are random, and 50 percent think that they are not random. So the experience-based update on BB probabilities is not strong, but I am still not afraid of being a BB, for two other reasons.

1. Any BB is a copy of a real observer, and so I am real. (This depends on how identity is resolved.)

2. BBs and real observers are not the dominating class of observers. There is a third class: Boltzmann supercomputers which simulate our reality. They are medium-size fluctuations which are very effective at creating trillions of observer-moments which are rather consistent. But a small amount of randomness also exists in such simulated universes (it could be experimentally found). I hope to elaborate on the idea in a long post soon.

• Found a similar idea in a recent article about Boltzmann Brains:

"What we can do, however, is recognize that it's no way to go through life. The data that an observer just like us has access to includes not only our physical environment, but all of the (purported) memories and knowledge in our brains. In a randomly-fluctuating scenario, there's no reason for this "knowledge" to have any correlation whatsoever with the world outside our immediate sensory reach. In particular, it's overwhelmingly likely that everything we think we know about the laws of physics, and the cosmological model we have constructed that predicts we are likely to be random fluctuations, has randomly fluctuated into our heads. There is certainly no reason to trust that our knowledge is accurate, or that we have correctly deduced the predictions of this cosmological model." https://arxiv.org/pdf/1702.00850.pdf

• As I said be­fore about skep­ti­cal sce­nar­ios: you can­not re­fute them by ar­gu­ment, by defi­ni­tion, be­cause the per­son ar­gu­ing for the skep­ti­cal sce­nario will say, “since you are in this skep­ti­cal sce­nario, your ar­gu­ment is wrong no mat­ter how con­vinc­ing it seems to you.”

But we do not be­lieve those sce­nar­ios, and that in­cludes the Boltz­mann Brain the­ory, be­cause they are not use­ful for any pur­pose. In other words, if you are a Boltz­mann Brain, you have no idea what would be good to do, and in fact ac­cord­ing to the the­ory you can­not do any­thing be­cause you will not ex­ist one sec­ond from now.

• I don’t think that’s de­scrip­tively true at all. Re­gard­less of whether or not I see a use­ful way to ad­dress it, I still wouldn’t ex­pect to dis­solve mo­men­tar­ily with no warn­ing.

Now, this may be be­cause hu­mans can’t eas­ily be­lieve in novel claims. But “my” ex­pe­rience cer­tainly seems more co­her­ent than one would ex­pect a BB’s to seem, and this calls out for ex­pla­na­tion.

• A Boltz­mann brain has no way to know any­thing, rea­son to any con­clu­sion, or what­ever. So it has no way to know whether its ex­pe­rience should seem co­her­ent or not. So your claim that this needs ex­pla­na­tion is an un­jus­tified as­sump­tion, if you are a Boltz­mann brain.

• One man’s modus po­nens is an­other man’s modus tol­lens. I don’t even be­lieve that you be­lieve the con­clu­sion.

• Which con­clu­sion? I be­lieve that a Boltz­mann brain can­not val­idly be­lieve or rea­son about any­thing, and I cer­tainly be­lieve that I am not a Boltz­mann brain.

More im­por­tantly, I be­lieve ev­ery­thing I said there.

• Seems like you’re us­ing a con­fus­ing defi­ni­tion of “be­lieve”, but the point is that I dis­agree about our rea­sons for re­ject­ing the claim that you’re a BB.

Note that ac­cord­ing to your rea­son­ing, any the­ory which says you’re a BB must give us a uniform dis­tri­bu­tion for all pos­si­ble ex­pe­riences. So ra­tio­nally com­ing to as­sign high prob­a­bil­ity to that the­ory seems nearly im­pos­si­ble if your ex­pe­rience is not ac­tu­ally ran­dom.

• My rea­son for re­ject­ing the claim of BB is that the claim is use­less—and I am quite sure that is my rea­son. I would definitely re­ject it for that rea­son even if I had an ar­gu­ment that seemed ex­tremely con­vinc­ing to me that there is a 95% chance I am a BB.

A the­ory that says I am a BB can­not as­sign a prob­a­bil­ity to any­thing, not even by giv­ing a uniform dis­tri­bu­tion. A BB the­ory is like a the­ory that says, “you are always wrong.” You can­not get any prob­a­bil­ity as­sign­ments from that, since as soon as you bring them up, the the­ory will say your as­sign­ments are wrong. In a similar way, a BB the­ory im­plies that you have never learned or stud­ied prob­a­bil­ity the­ory. So you do not know whether prob­a­bil­ities should sum to 100% (or to any similar nor­mal­ized re­sult) or any­thing else about prob­a­bil­ity the­ory.

As I said, BB the­ory is use­less—and part of its use­less­ness is that it can­not im­ply any con­clu­sions, not even any kind of prior over your ex­pe­riences.

1. I’m us­ing prob­a­bil­ity to rep­re­sent per­sonal un­cer­tainty, and I am not a BB. So I think I can le­gi­t­i­mately as­sign the the­ory a dis­tri­bu­tion to rep­re­sent un­cer­tainty, even if be­liev­ing the the­ory would make me more un­cer­tain than that. (Note that if we try to in­clude rad­i­cal log­i­cal un­cer­tainty in the dis­tri­bu­tion, it’s hard to ar­gue the num­bers would change. If a uniform dis­tri­bu­tion “is wrong,” how would I know what I should be as­sign­ing high prob­a­bil­ity to?)

2. I don’t think you as­sign a 95% chance to be­ing a BB, or even that you could do so with­out se­vere men­tal ill­ness. Be­cause for starters:

3. Hu­mans who re­ally be­lieve their ac­tions mean noth­ing don’t say, “I’ll just pre­tend that isn’t so.” They stop func­tion­ing. Per­haps you meant the bar is liter­ally 5% for mean­ingful ac­tion, and if you thought it was 0.1% you’d stop typ­ing?

4. I would agree if you’d said that evolu­tion hard­wired cer­tain premises or ap­prox­i­mate pri­ors into us ‘be­cause it was use­ful’ to evolu­tion. I do not be­lieve that hu­mans can use the sort of pas­calian rea­son­ing you claim to use here, not when the is­sue is BB or not BB. Nor do I be­lieve it is in any way nec­es­sary. (Also, the link doesn’t make this clear, but a true prior would need to in­clude con­di­tional prob­a­bil­ities un­der all the­o­ries be­ing con­sid­ered. Hu­mans, too, start life with a sketch of con­di­tional prob­a­bil­ities.)

• META: I made a com­ment in dis­cus­sion about the ar­ti­cle and add there my con­sid­er­a­tion why it is not bad to be BB, may be we could move dis­cus­sion there?

• If I wake up in a red room after the coin toss, I'm going to assume that there are a billion of us in red rooms, and one in a green room, and vice versa. That way a billion of me are assuming the truth, and one is not. So chances are (a billion out of a billion and one) that this iteration of me is assuming the truth.

We’ll each have to ac­cept, of course, the pos­si­bil­ity of be­ing wrong, but hey, it’s still the best op­tion for me al­to­gether.

To­mor­row I’ll talk about what sort of trou­ble you run into if you re­ply “a billion to one”.

Trou­ble? We’ll take it on to­gether, be­cause ev­ery “I” is in this team. [ap­plause]

• Eliezer_Yudkowsky wrote: "I want to reply, 'But then most people don't have experiences this ordered, so finding myself with an ordered experience is, on your hypothesis, very surprising.'"

One will feel surprised by winning a million dollars in the lottery too, but that doesn't mean it would be rational to assume that, just because one won a million dollars in the lottery, most people win a million dollars in the lottery.

Maybe most of us ex­ist only for a frac­tion of a sec­ond, but in that case, what is there to lose by (prob­a­bly falsely, but maybe maybe maybe cor­rectly) as­sum­ing that we ex­ist much longer than that, and liv­ing ac­cord­ingly? There is po­ten­tially some­thing to gain by as­sum­ing that, and noth­ing to lose, so it may very well be ra­tio­nal to as­sume that, even though it is very un­likely to be the case!

• How much re­sources should you de­vote to the next day vs. the next month vs. the next year? If each ad­di­tional sec­ond of ex­is­tence is a vast im­prob­a­bil­ity, for sim­plic­ity you may as­sume a few mo­ments of ex­is­tence, but no longer.

If, OTOH, once you live, say, 3 sec­onds, it’s as likely as not that you’ll live a few more years—there’s some sort of bi­modal­ity—then such a stance is jus­tified. Bi­modal­ity would only work if there were some sort of the­o­ret­i­cal jus­tifi­ca­tion.

• If everything that can happen, happens (sooner or later) - which is assumed - there will be continuations (not necessarily at the same spot in spacetime, but somewhere) of whatever brief life I have for a few seconds or Planck times now, and continuations of those continuations too, and so on, without end, meaning I'm immortal, given that identity does not depend on the survival of any particular atoms (as opposed to patterns in which atoms, any atoms, are arranged, anywhere). This means that what I achieve during the short existences that are most common in the universe will only be part of what I will have achieved in the long run, when all those short existences are "put together" (or thought of as one continuous life). Therefore, I should care about what my life will be like in a few years, in a few centuries, in a few googol years, et cetera - together, that is, my whole infinitely long future - more than I should care about any one short existence at any one place in spacetime. If I can maximize my overall happiness over my infinite life only by accepting a huge lot of suffering for a hundred years beginning now, I should do just that (if I'm a rational egoist).

My life may very well con­sist of pre­dom­i­nantly ex­tremely short-lived Boltz­mann-brains, but I don’t die just be­cause these Boltz­mann-brains die off one by one at a ter­rific rate.

• I said "how much", not "if". My point is that you should care vastly more about the next few seconds than about a few years from now.

• I am a Boltz­mann brain athe­ist. ;)

• This one always reminds me of flies repeatedly slamming their heads against a closed window rather than face the fact that there is something fundamentally wrong with some of our unproven assumptions about thermodynamics and the big bang.

• ...care to ex­plain fur­ther why we’re wrong?

• Do you re­ally want to see the an­swer?

• I’d like to be the first to point out that this post dou­bles as a very long (and very un­de­served) re­sponse to this post.

• Boltzmann brains are a problem even if you're a 50-percenter. Many fixed models of physics produce lots of BBs. Maybe you can solve this with a complexity prior, under which BBs are less real because they're hard to locate. But having done this, it's not clear to me how it interacts with Sleeping Beauty. It may well be that such a prior also favors worlds with fewer BBs - that is, worlds with fewer observers, but more properly weighted observers.

(ETA: I read the post back­wards, so that was a non se­quitur, but I do think the ap­pli­ca­tion of an­throp­ics to BB is not at all clear. I agree with Eliezer that it looks like it helps, but it might well make it worse.)

• Here’s a logic puz­zle that may have some vague rele­vance to the topic.

You and two team­mates are all go­ing to be taken into sep­a­rate rooms and have flags put on your heads. Each flag has a 50% chance of be­ing black or be­ing white. None of you can see what color your own flag is, but you will be told what color flags your two team­mates are wear­ing. Be­fore each of you leave your re­spec­tive rooms, you may make a guess as to what color flag you your­self are wear­ing. If at least one of you guesses cor­rectly and no­body guesses in­cor­rectly, you all win. If any­one makes an in­cor­rect guess, or if all three of you de­cide not to guess, you all lose.

If one of you guesses ran­domly and the other two choose not to guess, you have a 50% chance of win­ning. Even though it would seem that know­ing what color your team­mates’ flags are tells you noth­ing about your own, there is a way for your team to win this game more than half the time. How can it be done?

• My at­tempt at a solu­tion: if you see two flags of the same color, guess the op­po­site color, oth­er­wise don’t guess. This wins 75% of the time.

Lemma 1: it’s im­pos­si­ble that ev­ery­one chooses not to guess. Proof: some two peo­ple have the same color, be­cause there are three peo­ple and only two col­ors.

Lemma 2: the chance of los­ing is 25%. Proof: by lemma 1, the team can only lose if some­one guessed wrong, which im­plies all three col­ors are the same, which is 2 out of 8 pos­si­ble as­sign­ments.

This leaves open the ques­tion of whether this strat­egy is op­ti­mal. I highly sus­pect it is, but don’t have a proof yet.

UPDATE: here’s a proof I just found on the In­ter­net, it’s el­e­gant but not easy to come up with. I won­der if there’s a sim­pler one.
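The 75% claim is easy to check by brute force over all 2^3 flag assignments (a quick sketch of the strategy as stated above: guess the opposite color on seeing a matching pair, otherwise pass):

```python
from itertools import product

def guess(a, b):
    """One player's move given the two teammates' flags (0 or 1):
    guess the opposite color if they match, otherwise pass (None)."""
    return 1 - a if a == b else None

def team_wins(flags):
    # Each player sees the other two flags.
    guesses = [guess(flags[(i + 1) % 3], flags[(i + 2) % 3]) for i in range(3)]
    made = [(g, f) for g, f in zip(guesses, flags) if g is not None]
    # Win iff at least one guess is made and none are wrong.
    return bool(made) and all(g == f for g, f in made)

wins = sum(team_wins(f) for f in product((0, 1), repeat=3))
print(f"{wins}/8 wins")  # 6/8 = 75%; the losses are the two all-same-color cases
```

The trick is that wrong guesses are concentrated: in the two losing assignments all three players guess wrong at once, while in each winning assignment exactly one player guesses and is right.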

• It’s a tricky cat­e­gory of ques­tion alright—you can make it even trick­ier by vary­ing the pro­ce­dure by which the copies are cre­ated.

The best an­swer I’ve come up with so far is to just max­i­mize to­tal util­ity. Thus, I choose the billion to one side be­cause it max­i­mizes the num­ber of copies of me that hold true be­liefs. I will be in­ter­ested to see whether my pro­ce­dure with­stands your ar­gu­ment in the other di­rec­tion.

(And of course there is the other com­pli­ca­tion that strictly speak­ing the prob­a­bil­ity of a log­i­cal coin is ei­ther zero or one, we just don’t know which. But even though such log­i­cal un­cer­tain­ties are not strictly speak­ing mat­ters of prob­a­bil­ity, it is some­times most use­ful to treat them as such in a par­tic­u­lar con­text.)

• Well, I don’t think the anal­ogy holds up all that well. In the coin flip story we “know” that there was a time be­fore the uni­verse with two equally likely rules for the uni­verse. In the world as it is, AFAIK we re­ally don’t have a com­plete, in­ter­nally con­sis­tent set of phys­i­cal laws fully ca­pa­ble of ex­plain­ing the uni­verse as we ex­pe­rience it, let alone a com­plete set of all of them.

The idea that we live in some sort of low en­tropy bub­ble which spon­ta­neously formed in a high en­tropy greater uni­verse seems pretty im­plau­si­ble for the rea­sons you de­scribe. But I don’t think we can come to a con­clu­sion from this sig­nifi­cantly stronger than “there’s a lot we haven’t figured out yet”.

• Current physics models get around that question anyway. The way our brains work, there is more entropy after a memory is burned than before. Thus, time seems to flow from low to high entropy to us. If entropy were flowing in the other direction, then our brains would think of that direction as the past. The laws of thermodynamics are a side effect of how our brains process time.

Thus we can have low en­tropy → high en­tropy with­out a shit ton of Boltz­mann Brains.

• The laws of ther­mo­dy­nam­ics arise in prac­ti­cally any re­versible cel­lu­lar au­toma­ton with a tem­per­a­ture—they are not to do with brains.

• The laws of ther­mo­dy­nam­ics arise in our anal­y­sis of prac­ti­cally any re­versible cel­lu­lar au­toma­ton with a tem­per­a­ture.

• 8 Sep 2009 15:13 UTC
−1 points

Non-sci­en­tific hy­poth­e­sis: The uni­verse’s ini­tial state was a sin­gu­lar­ity as pos­tu­lated by the big bang the­ory, a state of min­i­mal en­tropy. As per ther­mo­dy­nam­ics, en­tropy has been, is, and will be in­creas­ing steadily from that point un­til pre­cisely 10^40 years from the Big Bang, at which point the uni­verse will cease to ex­ist with no warn­ing what­so­ever.

Though this hypothesis is very arbitrary (the figure "10^40 years" has roughly 300 bits of entropy), I figure it explains our observations at least 300 bits better than the "vanilla heat death hypothesis": The universe's initial state was . . . a state of minimal entropy. It reaches maximal entropy very quickly compared to the amount of time it spends at maximal entropy, and as a result effectively has no order.
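For what it's worth, the "roughly 300 bits" figure looks like the cost of specifying the shutdown instant to Planck-time precision within 10^40 years - my own reconstruction, not stated in the comment:

```python
import math

YEAR_IN_SECONDS = 3.156e7   # one year, in seconds
PLANCK_TIME = 5.39e-44      # seconds (approximate)

# Number of distinguishable Planck-time instants in 10^40 years,
# and the bits needed to single one of them out:
instants = 1e40 * YEAR_IN_SECONDS / PLANCK_TIME
print(round(math.log2(instants)))  # ~302 bits
```

Coarser precision (say, to the nearest year) would cost correspondingly fewer bits, so "roughly 300" is the upper end of the range.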

• Are past events a guide to the fu­ture—and if so, why?

That seems to be the topic here:

http://en.wikipedia.org/wiki/Problem_of_induction

• Re­gard­ing this post’s score (as of now, −3 points): was it re­ally that harm­ful?

• was it re­ally that harm­ful?

Yes, even more so the ten­dency to make like com­ments.

• FWIW, if pun­ish­ment was in­tended, it is un­likely to be effec­tive: I pretty-much just ig­nore the Less-Wrong karma sys­tem—partly on the grounds that crit­ics should heed crit­i­cism the least.

• In most cases, my com­plaint is that your com­ments lack rele­vance or sub­stance, which has noth­ing to do with dis­agree­ment.