2 Anthropic Questions

I have just finished reading the section on anthropic bias in Nassim Taleb’s book, The Black Swan. In general, the book is interesting to compare to the sort of thing I read on Less Wrong; its message is broadly similar, except less Bayesian (and therefore less formal, at times slightly anti-formal, arguing against misleading math).

Two points concerning anthropic weirdness.

First:

If we win the lottery, should we really conclude that we live in a holodeck (or some such)? From real-life anthropic weirdness:

Pity those poor folk who actually win the lottery! If the hypothesis “this world is a holodeck” is normatively assigned a calibrated confidence well above 10⁻⁸, the lottery winner now has incommunicable good reason to believe they are in a holodeck. (I.e. to believe that the universe is such that most conscious observers observe ridiculously improbable positive events.)

It seems to me that the right way of approaching the question is: before buying the lottery ticket, what belief-forming strategy would we prefer ourselves to have? (Ignore the issue of why we buy the ticket, of course.) Or, slightly differently: what advice would you give to other people (for example, if you’re writing a book on rationality that might be widely read)?

“Common sense” says that it would be quite silly to start believing some strange theory just because I win the lottery. However, Bayes says that if we assign a prior probability greater than 10⁻⁸ to “strange” explanations of holding a winning ticket (against winning odds of roughly 1 in 10⁸), then upon winning we should prefer those explanations. In fact, we may want to buy a lottery ticket to test those theories! (This would be a very sensible test, one which would strongly tend to give the right result.)
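To make the arithmetic concrete, here is a minimal sketch in Python. The numbers are my own illustrative assumptions, not anything from the original sources: winning odds of one in 10⁸, and a holodeck that all but guarantees its occupant a win.

```python
# Toy Bayesian update for the lottery winner. Numbers are illustrative
# assumptions: winning odds of 1 in 10^8, and a holodeck that all but
# guarantees its occupant a win.
p_win_given_normal   = 1e-8  # chance of winning a fair 1-in-10^8 lottery
p_win_given_holodeck = 1.0   # assumed: holodecks show improbable positive events

for prior in (1e-10, 1e-8, 1e-6):
    # Bayes' theorem: P(holodeck | win) = P(win | holodeck) P(holodeck) / P(win)
    p_win = prior * p_win_given_holodeck + (1 - prior) * p_win_given_normal
    posterior = prior * p_win_given_holodeck / p_win
    print(f"prior {prior:.0e} -> P(holodeck | win) = {posterior:.3f}")

# prior 1e-10 -> P(holodeck | win) = 0.010
# prior 1e-08 -> P(holodeck | win) = 0.500
# prior 1e-06 -> P(holodeck | win) = 0.990
```

A prior of exactly 10⁻⁸ leaves the winner split fifty-fifty; anything well above it, as in the quoted passage, makes the holodeck the preferred explanation.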

However, as a society, we would not want lottery winners to go crazy. Therefore, we would not want to give the advice “if you win, you should massively update your probabilities”.

(This is similar to the idea that we might be persuaded to defect in the Prisoner’s Dilemma if we are maximizing our personal utility, but that if we are giving advice about rationality to other people, we should advise them to cooperate. In a somewhat unjustified leap, I suppose we should take the advice we would give to others in such matters. But that position seems to be widely accepted here already.)

On the other hand, if we were in a position to give advice to people who might really be living in a simulation, it would suddenly be good advice!

Second:

Taleb discusses an interesting example of anthropic bias:

Apply this reasoning to the following question: Why didn’t the bubonic plague kill more people? People will supply quantities of cosmetic explanations involving theories about the intensity of the plague and “scientific models” of epidemics. Now, try the weakened causality argument that I have just emphasized in this chapter: had the bubonic plague killed more people, the observers (us) would not be here to observe. So it may not necessarily be the property of diseases to spare us humans.

You’ll have to read the chapter if you want to know exactly what “argument” is being discussed, but the general point is (hopefully) clear from this passage. If an event was a necessary prerequisite for our existence, then we should not take our survival of that event as evidence that such events are highly survivable. If we remember surviving a car crash, we should not take that memory to increase our estimate of the probability of surviving a car crash. (Instead, we should look at other people’s car crashes.)
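Here is a toy Monte Carlo sketch (my own, not Taleb’s) of why “look at other car crashes” is the right move: observers who estimate the survival rate from their own remembered history always get 100%, no matter what the true rate is.

```python
import random

random.seed(0)
TRUE_SURVIVAL = 0.3   # assumed per-event survival probability (illustrative)
N_OBSERVERS   = 100_000
N_EVENTS      = 3     # dangerous events each observer faces

all_outcomes = []     # every event outcome, survivor or not
remembered   = []     # outcomes as remembered by surviving observers
for _ in range(N_OBSERVERS):
    outcomes = [random.random() < TRUE_SURVIVAL for _ in range(N_EVENTS)]
    all_outcomes.extend(outcomes)
    if all(outcomes):  # only those who survived everything remain to remember
        remembered.extend(outcomes)

print(f"rate estimated from all events:      {sum(all_outcomes) / len(all_outcomes):.3f}")
print(f"rate as remembered by survivors:     {sum(remembered) / len(remembered):.3f}")
# ~0.300 versus exactly 1.000: conditioning on our own survival
# destroys the evidential value of our remembered survivals.
```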

This conclusion is somewhat troubling (as Taleb admits). It means that the past is fundamentally different from the future! The past is a relatively “safe” place, where every event has led to our survival. The future is alien and unforgiving. As is said in the story The Hero With A Thousand Chances:

“The Counter-Force isn’t going to help you this time. No hero’s luck. Nothing but creativity and any scraps of real luck—and true random chance is as liable to hurt you as the Dust. Even if you do survive this time, the Counter-Force won’t help you next time either. Or the time after that. What you remember happening before—will not happen for you ever again.”

Now, Taleb is saying that we are that hero. Scary, right?

On the other hand, it seems reasonable to be skeptical of a view that makes it difficult to generalize from the past to the future. So. Any opinions?