Of all the SIA-doomsdays in all the worlds...

Ideas developed with Paul Almond, who kept on flogging a dead horse until it started showing signs of life again.

Doomsday, SSA and SIA

Imagine there’s a giant box filled with people, and clearly labelled (inside and out) “(year of some people’s lord) 2013”. There’s another giant box somewhere else in space-time, labelled “2014”. You happen to be currently in the 2013 box.

Then the self-sampling assumption (SSA) produces the doomsday argument. It works approximately like this: SSA has a preference for universes with smaller numbers of observers (since it’s more likely that you’re one-in-a-hundred than one-in-a-billion). Therefore we expect the number of observers in 2014 to be smaller than we would otherwise “objectively” believe: the likelihood of doomsday is higher than we thought.
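In rough Bayesian terms (a sketch; write $N_{2013}(w)$ and $N_{2014}(w)$ for the number of observers in each box in world $w$): SSA treats you as a random sample from all observers, so learning that you’re in the 2013 box weights each world by the fraction of its observers who are there:

$$P(w \mid \text{in the 2013 box}) \;\propto\; P(w)\,\frac{N_{2013}(w)}{N_{2013}(w)+N_{2014}(w)},$$

which is largest for worlds with few 2014 observers.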

What about the self-indication assumption (SIA)? That makes the doomsday argument go away, right? Not at all! SIA has no effect on the number of observers expected in 2014, but it increases the expected number of observers in 2013. Thus we still expect the number of observers in 2014 to be lower, relative to 2013, than we otherwise thought. There’s an SIA doomsday too!
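Here’s a minimal toy model of both updates (the box sizes and the two-point prior are invented purely for illustration). Under SIA, the population weight $N_{2013}+N_{2014}$ and the conditioning factor $N_{2013}/(N_{2013}+N_{2014})$ multiply out to a weight proportional to $N_{2013}$ alone, so with independent boxes the expected 2014 population is untouched while the 2013 one grows:

```python
# Toy model of the two-box setup. The box sizes and the uniform
# two-point prior are purely illustrative assumptions.
sizes = [100, 1000]
prior = {(a, b): 0.25 for a in sizes for b in sizes}  # (N_2013, N_2014)

def expectations(weights):
    """Posterior expectations of N_2013 and N_2014 under the given weights."""
    z = sum(weights.values())
    e13 = sum(w * a for (a, b), w in weights.items()) / z
    e14 = sum(w * b for (a, b), w in weights.items()) / z
    return round(e13, 1), round(e14, 1)

# "Objective" expectations: no anthropic update at all.
print("objective:", expectations(prior))  # (550.0, 550.0)

# SSA: weight each world by the chance a random observer is in the 2013 box.
ssa = {(a, b): p * a / (a + b) for (a, b), p in prior.items()}
print("SSA:", expectations(ssa))          # (734.1, 365.9)

# SIA: weight by total population, then condition on being in 2013;
# the two factors multiply out to a weight proportional to N_2013 alone.
sia = {(a, b): p * a for (a, b), p in prior.items()}
print("SIA:", expectations(sia))          # (918.2, 550.0)
```

Both updates push the 2014-to-2013 ratio down; SIA just does it by inflating 2013 rather than deflating 2014.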

Enter causality

What’s going on? SIA was supposed to defeat the doomsday argument! What happens is that I’ve implicitly cheated: by naming the boxes “2013” and “2014”, I’ve heavily implied that these “boxes” figuratively correspond to two successive years. But then I’ve treated them as independent for SIA, like two literal, distinct boxes.

In reality, of course, the contents of two successive years are not independent. Causality connects the two: it’s much more likely that there are many observers in 2014 if there were many in 2013. Indeed, most of the observers will exist in both years (and there are some subtle SSA issues of “observer moments” that we’re eliding here: does a future you count as the same observer or a different one?). So causality removes the independence assumption, and though there may be some interesting SIA effects on changes in growth rates, we won’t see a real SIA doomsday.
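To see that the independence assumption was doing all the work, re-run the SIA update from the toy model above with a prior that couples the two boxes (numbers still invented): when the 2014 population is forced to track the 2013 one, SIA’s boost to 2013 drags 2014 up with it, and nothing doom-like happens to the ratio between the years:

```python
# Same toy SIA update, but with a causally-coupled prior: the 2014
# population simply equals the 2013 one (illustrative numbers again).
prior = {(100, 100): 0.5, (1000, 1000): 0.5}  # (N_2013, N_2014)

def expectations(weights):
    z = sum(weights.values())
    return (round(sum(w * a for (a, b), w in weights.items()) / z, 1),
            round(sum(w * b for (a, b), w in weights.items()) / z, 1))

sia = {(a, b): p * a for (a, b), p in prior.items()}
print("objective:", expectations(prior))  # (550.0, 550.0)
print("SIA:", expectations(sia))          # (918.2, 918.2): both years rise together
```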

Exit causality

But is causality itself a justified assumption? To some extent it is: some people who think they live in 2013 will certainly be in a causal relationship with people who think they live in 2014, and vice versa.

But many will not! What of Boltzmann brains? What of Boltzmann worlds—brief worlds that last less than a year?

What about deluded worlds: worlds where the background data coincidentally (or conspiratorially) imply that we exist on the planet Earth around the Sun, in 2013, but where we really exist inside a star circling a space-station a few seconds after the seventh toroidal big bang, or something, and will soon wake up to this fact? What about simply deluded people: may we be alien nutjobs, dreaming we’re humans? And, of great relevance to arguments often presented here, what if we are short-term simulations run by some advanced species?

All these are possible, with non-zero probability (the probability of simulations may be very high indeed, under some assumptions). All of these break the causal link to 2014 or other future events. And hence all of them allow a genuine SIA doomsday argument to flourish: we should expect that seeing 2014 is less likely than is objectively implied, given that we think we are in the year 2013.
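Here’s one last sketch of that effect (world types, priors and populations are all invented for illustration): if disconnected worlds such as simulations or Boltzmann brains can be packed densely with 2013-observers, SIA’s population weighting inflates even a tiny prior on them, and the probability of going on to see 2014 falls well below its objective value:

```python
# Toy mixture: observers who believe it is 2013 live either in a "causal"
# world (2014 follows) or a "disconnected" one (it doesn't). All numbers
# are illustrative assumptions.
worlds = [
    {"prior": 0.99, "n_2013": 1_000, "sees_2014": 1.0},    # causal world
    {"prior": 0.01, "n_2013": 100_000, "sees_2014": 0.0},  # sims, Boltzmann brains...
]

def p_sees_2014(weight):
    """Probability of experiencing 2014, averaging worlds by the given weight."""
    z = sum(weight(w) for w in worlds)
    return round(sum(weight(w) * w["sees_2014"] for w in worlds) / z, 2)

print("objective:", p_sees_2014(lambda w: w["prior"]))          # 0.99
print("SIA:", p_sees_2014(lambda w: w["prior"] * w["n_2013"]))  # 0.5
```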

Is anthropic decision theory (ADT) subject to the same doomsday argument? In that form, no. In any situation where causality breaks down, your decisions cease to have consequences, and ADT simply tosses those situations aside. But more complicated preferences or setups could bring doomsday into ADT as well.