2 Anthropic Questions

I have just finished reading the section on anthropic bias in Nassim Taleb's book, The Black Swan. In general, the book is interesting to compare to the sort of things I read on Less Wrong; its message is largely very similar, except less Bayesian, and therefore less formal (at times slightly anti-formal, arguing against misleading math).

Two points concerning anthropic weirdness.

First:

If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From real-life anthropic weirdness:

Pity those poor folk who actually win the lottery! If the hypothesis "this world is a holodeck" is normatively assigned a calibrated confidence well above 10^-8, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)

It seems to me that the right way of approaching the question is: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we buy the ticket, of course.) Or, slightly differently: what advice would you give to other people (for example, if you're writing a book on rationality that might be widely read)?

"Common sense" says that it would be quite silly to start believing some strange theory just because I win the lottery. However, Bayes says that if we assign greater than 10^-8 prior probability to "strange" explanations of getting a winning lottery ticket, then, upon winning, we should prefer them to the fair-win explanation. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, which would strongly tend to give the right result.)
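To make the arithmetic concrete, here is a minimal Bayes sketch. The specific numbers are my own illustrative assumptions, not from the quoted post: a holodeck that makes a win near-certain, honest-lottery odds of 10^-8 (matching the quote's figure), and a one-in-a-million prior.

```python
# Minimal Bayes sketch of the lottery-winner's update.
# Assumptions (mine, for illustration): the holodeck hypothesis makes a
# win near-certain, and an honest lottery pays off with probability 10^-8.
def posterior_holodeck(prior, p_win_holodeck=1.0, p_win_normal=1e-8):
    """P(holodeck | we won), by Bayes' rule."""
    joint_holodeck = prior * p_win_holodeck
    joint_normal = (1 - prior) * p_win_normal
    return joint_holodeck / (joint_holodeck + joint_normal)

# With a prior "well above 10^-8" -- say one in a million -- winning
# makes the holodeck by far the best explanation:
print(posterior_holodeck(1e-6))  # ~0.99
```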

However, as a society, we would not want lottery winners to go crazy. Therefore, we would not want to give the advice "if you win, you should massively update your probabilities".

(This is similar to the idea that we might be persuaded to defect in the Prisoner's Dilemma if we are maximizing our personal utility, but if we are giving advice about rationality to other people, we should advise them that cooperating is the optimal strategy; a sketch of the payoff logic follows this paragraph. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But I suppose that position is already widely accepted here.)
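For readers who want that payoff logic spelled out, here is a tiny sketch using the standard Prisoner's Dilemma payoffs (the numbers are conventional illustrative values, not from this post), showing why defection dominates for personal utility even though mutual cooperation beats mutual defection:

```python
# Standard Prisoner's Dilemma payoffs (illustrative numbers):
# PAYOFF[(my_move, their_move)] = utility for me.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

for their_move in ("C", "D"):
    # Whatever the other player does, defecting pays more for me...
    assert PAYOFF[("D", their_move)] > PAYOFF[("C", their_move)]

# ...yet mutual cooperation beats mutual defection for both players.
assert PAYOFF[("C", "C")] > PAYOFF[("D", "D")]
print("Defection dominates individually; cooperation is better jointly.")
```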

On the other hand, if we were in a position to give advice to people who might really be living in a simulation, "massively update if you win" would suddenly be good advice!

Second:

Taleb discusses an interesting example of anthropic bias:

Apply this reasoning to the following question: Why didn't the bubonic plague kill more people? People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and "scientific models" of epidemics. Now, try the weakened causality argument that I have just emphasized in this chapter: had the bubonic plague killed more people, the observers (us) would not be here to observe. So it may not necessarily be the property of diseases to spare us humans.

You'll have to read the chapter if you want to know exactly what "argument" is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence of a high probability of surviving such events. If we remember surviving a car crash, we should not take that to increase our estimate of the probability of surviving car crashes. (Instead, we should look at other people's car crashes, where our observation does not select for survival.)
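A toy simulation makes the selection effect visible. All numbers here are illustrative assumptions of mine: each crash kills with probability 0.5, and an observer can only report their history if they survived every past crash.

```python
import random

# Toy observer-selection simulation.
random.seed(0)
TRUE_SURVIVAL = 0.5      # assumed per-crash survival probability
N_OBSERVERS = 100_000
N_PAST_CRASHES = 5

# Who is still alive to remember their past?
reporters = sum(
    all(random.random() < TRUE_SURVIVAL for _ in range(N_PAST_CRASHES))
    for _ in range(N_OBSERVERS)
)

# Every reporter's remembered past is 100% survival...
print(f"{reporters} reporters; survival rate in their memories: 1.00")

# ...but the next crash still kills at the true rate.
survived_next = sum(random.random() < TRUE_SURVIVAL for _ in range(reporters))
print(f"survival rate on the next crash: {survived_next / reporters:.2f}")
```

Every surviving reporter remembers a perfectly safe past, yet their next crash kills at the unchanged true rate, which is exactly the past/future asymmetry described next.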

This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past will be a relatively "safe" place, where every event has led to our survival. The future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:

"The Counter-Force isn't going to help you this time. No hero's luck. Nothing but creativity and any scraps of real luck—and true random chance is as liable to hurt you as the Dust. Even if you do survive this time, the Counter-Force won't help you next time either. Or the time after that. What you remember happening before—will not happen for you ever again."

Now, Taleb is saying that we are that hero. Scary, right?

On the other hand, it seems reasonable to be skeptical of a view which presents difficulties generalizing from the past to the future. So. Any opinions?