Self-Indication Assumption—Still Doomed

I recently posted a discussion article on the Doomsday Argument (DA) and Strong Self-Sampling Assumption. See http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/

This new post is related to another part of the literature concerning the Doomsday Argument: the Self-Indication Assumption, or SIA. For those not familiar, the SIA says (roughly) that I would be more likely to exist if the world contains a large number of observers. So, when taking into account the evidence that I exist, I should shift my probability assessments towards models of the world with more observers.

Further, at first glance, it looks like the SIA shift exactly counteracts the effect of the DA shift. Consider, for instance, these two hypotheses:

H1. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion observers.

H2. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion trillion observers.

Suppose I had assigned a prior probability ratio p_r = P(H1)/P(H2) before considering either the SIA or the DA. When I apply the SIA, this ratio shrinks by a factor of a trillion, i.e. I become much more confident in hypothesis H2. But when I then observe that I am roughly the 100 billionth human being and apply the DA, the ratio expands back by exactly the same factor of a trillion, since this observation is a trillion times more likely under H1 than under H2. So my probability ratio returns to p_r. I should not make any predictions about “Doom Soon” unless I already believed them at the outset, for other reasons.
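To make the cancellation explicit, here is a minimal sketch in Python using the numbers above (the variable names are mine, purely for illustration):

```python
# Minimal sketch of the SIA/DA cancellation, using the post's numbers.
N1 = 200e9   # total observers under H1 (200 billion)
N2 = 200e21  # total observers under H2 (200 billion trillion)

# SIA: my existence is more likely in worlds with more observers, so the
# odds P(H1)/P(H2) get multiplied by N1/N2.
sia_factor = N1 / N2

# DA (via the Self-Sampling Assumption): my observed birth rank has
# likelihood 1/N in a world with N observers, so the odds get multiplied
# by (1/N1)/(1/N2) = N2/N1.
da_factor = (1 / N1) / (1 / N2)

# The two factors cancel: the posterior odds equal the prior odds p_r.
print(f"combined shift: {sia_factor * da_factor:.3f}")  # ~1.000
```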

Now I won’t discuss here whether the SIA is justified or not; my main concern is whether it actually helps to counteract the Doomsday Argument. And it seems quite clear to me that it doesn’t. If we choose to apply the SIA at all, then it will instead overwhelmingly favour a hypothesis like H3 below over either H1 or H2:

H3. Across all of space time, there are infinitely many civilizations of observers, and infinitely many observers in total.

In short, by applying the SIA we wipe out from consideration all the finite-world models, and then only have to look at the infinite ones (e.g. models with an infinite universe, or with infinitely many universes). But now, consider that H3 has two sub-models:

H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.

H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.

Notice that while the SIA is indifferent between these sub-cases (since both contain the same, infinite, number of observers), it seems clear that the DA still greatly favours H3.1 over H3.2. Whatever our prior ratio r’ = P(H3.1)/P(H3.2), the DA raises that ratio by a factor of a trillion, and so the combination of SIA and DA also raises it by a factor of a trillion. The SIA doesn’t stop the shift.
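As a quick sanity check of that arithmetic, a sketch under the same SSA-style likelihood assumption as the earlier snippet (again, variable names are mine):

```python
# SIA is indifferent between H3.1 and H3.2 (both contain infinitely many
# observers), so only the DA factor acts. Under the Self-Sampling
# Assumption, P(my birth rank = r | civilization of N observers) = 1/N,
# so the likelihood ratio in favour of H3.1 is N2/N1.
N1 = 200e9   # mean observers per civilization under H3.1
N2 = 200e21  # mean observers per civilization under H3.2

da_shift = (1 / N1) / (1 / N2)
print(f"shift towards H3.1: {da_shift:.1e}")  # 1.0e+12, i.e. a trillion
```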

Worse still, the conclusion of the DA has now become far *stronger*, since it seems the only way for H3.1 to hold is if there is some form of “Universal Doom” scenario. Loosely, pretty much every one of those infinitely many civilizations will have to terminate itself before managing to expand away from its home planet.

Looked at more carefully, there is some probability p_e of a civilization expanding that is consistent with H3.1, but it has to be unimaginably tiny. If the population ratio of an expanded civilization to a non-expanded one is R_e, then H3.1 requires that p_e < 1/R_e (otherwise the expanded civilizations would dominate the mean, pushing it far above 200 billion). But values of R_e > 10^12 (a trillion) look right; indeed values of R_e > 10^24 (a trillion trillion) look plausible, which then forces p_e < 10^-12 and plausibly p_e < 10^-24. The believer in the SIA has to be a really strong Doomer to get this to work!
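To see where the bound comes from, here is a small sketch of the constraint (the formula for the mean is my own back-of-envelope reading of the paragraph above):

```python
# With expansion probability p_e and population ratio R_e, the mean
# observers per civilization is roughly N * ((1 - p_e) + p_e * R_e).
# Keeping that mean near the non-expanded value N, as H3.1 requires,
# forces p_e * R_e << 1, i.e. p_e < 1/R_e.
for R_e in (1e12, 1e24):
    print(f"R_e = {R_e:.0e}  =>  p_e must be below {1 / R_e:.0e}")
```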

By contrast, the standard DA doesn’t have to be quite so doomerish. It can work with a rather higher probability p_e of expansion and avoiding doom, as long as the world is finite and the total number of actual civilizations is less than 1/p_e. As an example, consider:

H4. There are 1000 civilizations of observers in the world, and each has a probability of 1 in 10000 of expanding beyond its home planet. Conditional on a civilization not expanding, its expected number of observers is 200 billion.

This hypothesis seems pretty consistent with our current observations (observing that we are roughly the 100 billionth human being). It predicts, with 90% probability, that all observers will find themselves on the home planet of their civilization. Since this H4 prediction applies to all observers, we don’t actually have to worry about whether we are a “random” observer or not; the prediction still holds. The hypothesis also predicts that, while the prospect of expansion will appear just about attainable for a civilization, it won’t in fact happen.
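The 90% figure is just the chance that none of the 1000 civilizations expands, assuming expansions are independent; a quick check:

```python
# Probability that no civilization expands under H4's numbers, assuming
# each of the 1000 civilizations expands independently with probability
# 1/10000.
n_civs = 1000
p_expand = 1 / 10000

p_no_expansion = (1 - p_expand) ** n_civs
print(f"P(no civilization expands) = {p_no_expansion:.3f}")  # ~0.905
```

Note that H4 also satisfies the finiteness condition above: the number of civilizations (1000) is less than 1/p_e = 10000.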

P.S. With a bit of re-scaling of the numbers, this post also works with observations or observer-moments, not just observers. See my previous post for more on this.