Self-Indication Assumption—Still Doomed

I recently posted a discussion article on the Doomsday Argument (DA) and the Strong Self-Sampling Assumption. See http://lesswrong.com/lw/9im/doomsday_argument_with_strong_selfsampling/

This new post is related to another part of the literature concerning the Doomsday Argument: the Self-Indication Assumption, or SIA. For those not familiar, the SIA says (roughly) that I would be more likely to exist if the world contains a large number of observers. So, when taking into account the evidence that I exist, this should shift my probability assessments towards models of the world with more observers.

Further, at first glance, it looks like the SIA shift can be arranged to exactly counteract the effect of the DA shift. Consider, for instance, these two hypotheses:

H1. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion observers.

H2. Across all of space time, there is just one civilization of observers (humans) and a total of 200 billion trillion observers.

Suppose I had assigned a prior probability ratio p_r = P(H1)/P(H2) before considering either SIA or the DA. Then when I apply the SIA, this ratio will shrink by a factor of a trillion, i.e. I've become much more confident in hypothesis H2. But then when I observe I'm roughly the 100 billionth human being, and apply the DA, the ratio expands back by exactly the same factor of a trillion, since this observation is much more likely under H1 than under H2. So my probability ratio returns to p_r. I should not make any predictions about "Doom Soon" unless I already believed them at the outset, for other reasons.
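The cancellation is easy to check numerically. A minimal sketch, using exact rational arithmetic and the observer counts from H1 and H2 (the prior ratio is arbitrary; any starting value cancels the same way):

```python
from fractions import Fraction

N1 = 200 * 10**9   # total observers under H1 (200 billion)
N2 = 200 * 10**21  # total observers under H2 (200 billion trillion)

p_r = Fraction(1)  # prior ratio P(H1)/P(H2); any value works

# SIA: weight each hypothesis by its total number of observers.
after_sia = p_r * Fraction(N1, N2)

# DA: the likelihood of holding any particular birth rank is 1/N under
# each hypothesis, so the observation multiplies the ratio by N2/N1.
after_da = after_sia * Fraction(N2, N1)

print(after_da == p_r)  # True: the two shifts cancel exactly
```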

Now I won't discuss here whether the SIA is justified; my main concern is whether it actually helps to counteract the Doomsday Argument. And it seems quite clear to me that it doesn't. If we choose to apply the SIA at all, then it will instead overwhelmingly favour a hypothesis like H3 below over either H1 or H2:

H3. Across all of space time, there are infinitely many civilizations of observers, and infinitely many observers in total.

In short, by applying the SIA we wipe out from consideration all the finite-world models, and then only have to look at the infinite ones (e.g. models with an infinite universe, or with infinitely many universes). But now, consider that H3 has two sub-models:

H3.1. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking a suitable limit construction to define the mean) is 200 billion observers.

H3.2. Across all of space time, there are infinitely many civilizations of observers, but the mean number of observers per civilization (taking the same limit construction) is 200 billion trillion observers.

Notice that while SIA is indifferent between these sub-cases (since both contain the same number of observers), it seems clear that DA still greatly favours H3.1 over H3.2. Whatever our prior ratio r' = P(H3.1)/P(H3.2), DA raises that ratio by a factor of a trillion, and so the combination of SIA and DA also raises that ratio by a factor of a trillion. SIA doesn't stop the shift.
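As a quick check of the arithmetic: both sub-models contain the same (infinite) number of observers, so SIA contributes no shift, while the DA likelihood of a birth rank around 100 billion within one's own civilization scales inversely with the mean:

```python
from fractions import Fraction

mean_31 = 200 * 10**9   # mean observers per civilization under H3.1
mean_32 = 200 * 10**21  # mean observers per civilization under H3.2

sia_factor = Fraction(1)                # SIA: same infinite total, indifferent
da_factor = Fraction(mean_32, mean_31)  # DA: a ~100 billionth birth rank is
                                        # far likelier under the smaller mean

print(sia_factor * da_factor)  # 1000000000000: a trillion-fold shift to H3.1
```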

Worse still, the conclusion of the DA has now become far *stronger*, since it seems that the only way for H3.1 to hold is if there is some form of "Universal Doom" scenario. Loosely, pretty much every one of those infinitely many civilizations will have to terminate itself before managing to expand away from its home planet.

Looked at more carefully, there is some probability p_e of a civilization expanding which is consistent with H3.1, but it has to be unimaginably tiny. If the population ratio of an expanded civilization to a non-expanded one is R_e, then H3.1 requires that p_e < 1/R_e. But values of R_e > a trillion look right; indeed values of R_e > 10^24 (a trillion trillion) look plausible, which then forces p_e < 10^-12 and plausibly p_e < 10^-24. The believer in the SIA has to be a really strong Doomer to get this to work!
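A rough sketch of why H3.1 pins p_e down this far. The helper below is illustrative (not from the original argument): the mean observers per civilization is roughly n·((1 − p_e) + p_e·R_e), so unless p_e < 1/R_e the expanded civilizations dominate the mean and push it far above 200 billion:

```python
n = 200 * 10**9  # observers in a non-expanded civilization

def mean_observers(p_e, R_e):
    """Illustrative: expected observers per civilization when a fraction
    p_e of civilizations expand to R_e times the non-expanded population."""
    return (1 - p_e) * n + p_e * (R_e * n)

# With p_e well above 1/R_e, the mean is dominated by expanded civilizations,
# contradicting H3.1's mean of 200 billion; below 1/R_e it stays near n.
print(mean_observers(p_e=1e-6, R_e=1e12) / n)   # ~1e6: mean blown up
print(mean_observers(p_e=1e-13, R_e=1e12) / n)  # ~1.1: mean stays near n
```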

By contrast, the standard DA doesn't have to be quite so doomerish. It can work with a rather higher probability p_e of expansion and avoiding doom, as long as the world is finite and the total number of actual civilizations is less than 1/p_e. As an example, consider:

H4. There are 1000 civilizations of observers in the world, and each has a probability of 1 in 10000 of expanding beyond its home planet. Conditional on a civilization not expanding, its expected number of observers is 200 billion.

This hypothesis seems to be pretty consistent with our current observations (observing that we are roughly the 100 billionth human being). It predicts that, with 90% probability, all observers will find themselves on the home planet of their civilization. Since this H4 prediction applies to all observers, we don't actually have to worry about whether we are a "random" observer or not; the prediction still holds. The hypothesis also predicts that, while the prospect of expansion will appear just about attainable for a civilization, it won't in fact happen.
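The 90% figure can be verified directly. Assuming the expansions are independent, the chance that none of H4's 1000 civilizations expands (in which case every observer is on its civilization's home planet) is:

```python
# Probability that none of H4's 1000 civilizations expands, with each
# expanding independently with probability 1/10000:
p_no_expansion = (1 - 1/10000) ** 1000
print(round(p_no_expansion, 3))  # 0.905
```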

P.S. With a bit of re-scaling of the numbers, this post also works with observations or observer-moments, not just observers. See my previous post for more on this.