SIA won’t doom you

Katja Grace has just presented an ingenious model, claiming that SIA combined with the great filter generates its own variant of the doomsday argument. Robin echoed this on Overcoming Bias. We met soon after Katja had come up with the model, and I signed up to it, saying that I could see no flaw in the argument.

Unfortunately, I erred. The argument does not work in the form presented.

First of all, there is the issue of time dependence. We are not just a human level civilization drifting through the void in blissful ignorance about our position in the universe. We know (approximately) the age of our galaxy, and the time elapsed since the big bang.

How is this relevant? It is relevant because all arguments about the great filter are time-dependent. Imagine that, by some fluke, we had reached consciousness and human-level civilization a mere two thousand years after the formation of our galaxy, via an evolutionary process that itself took only two thousand years. We see no aliens around us. In this situation, we have no reason to suspect any great filter; if we asked ourselves “are we likely to be the first civilization to reach this stage?”, the answer is probably yes. No evidence for a filter.

Imagine, instead, that we had reached consciousness a trillion years into the life of our galaxy, again via an evolutionary process that took two thousand years, and we see no aliens or traces of aliens. Then the evidence for a filter is overwhelming; something must have stopped all those previous likely civilizations from emerging into the galactic plane.

So neither of these civilizations can be included in our reference class (indeed, the second one can only exist if we ourselves are filtered!). So the correct reference class to use is not “the class of all potential civilizations in our galaxy that have reached our level of technological advancement and seen no aliens”, but “the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens”. Indeed, SIA, once we update on the present, cannot tell us anything about the future.

But there’s more. Let us lay aside, for the moment, the issue of time dependence. Let us instead consider the diagrams in Katja’s post as if the vertical axis were time: all potential civilizations start at the same point, and progress at the same rate. Is there still a role for SIA?

The answer is… it depends. It depends entirely on your choice of prior. To illustrate this, consider this pair of early-filter worlds, X and Y:

To simplify, I’ve flattened the diagram, and now consider only two states: human civilizations and basic lifeforms. And here are some late filter worlds, A and B:

Assign an equal prior of 1/4 to each one of these worlds. Then the prior probability of living in a late filter world is (1/4 + 1/4) = 1/2, and the same holds for early filter worlds.

Let us now apply SIA. This boosts the probability of Y and B at the expense of A and X: Y and B end up with a probability of 1/3 each, while A and X end up with a probability of 1/6 each. The posterior probability of living in a late filter world is (1/3 + 1/6) = 1/2, and the same goes for early filter worlds. Applying SIA has not changed the odds of late versus early filters.
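To make the bookkeeping explicit, here is a minimal Python sketch of the update (the sia_update helper and the 1-versus-2 observer counts are just my illustrative encoding of the diagrams):

```python
from fractions import Fraction

def sia_update(priors, observers):
    """Reweight each world's prior by its number of observers (SIA), then renormalise."""
    weighted = {w: p * observers[w] for w, p in priors.items()}
    total = sum(weighted.values())
    return {w: p / total for w, p in weighted.items()}

# Equal priors of 1/4; Y and B contain twice as many human civilizations as X and A.
observers = {"X": 1, "Y": 2, "A": 1, "B": 2}
priors = {w: Fraction(1, 4) for w in "XYAB"}
posterior = sia_update(priors, observers)

print(posterior)                        # X: 1/6, Y: 1/3, A: 1/6, B: 1/3
print(posterior["A"] + posterior["B"])  # late filter:  1/2
print(posterior["X"] + posterior["Y"])  # early filter: 1/2
```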

But people might feel this is unfair; that I have loaded the dice, especially by giving world Y the same prior as the others. It has too many primitive lifeforms; it’s too unlikely. Fine then; let us give prior probabilities as follows:

X      Y      A      B
2/30   1/30   18/30  9/30

This prior does not exactly over-weight the chance of human survival! The prior probability of a late filter is (18/30 + 9/30) = 9/10, while that of an early filter is 1/10. But now let us consider how SIA changes those odds: Y and B are weighted by a factor of two, while X and A are weighted by a factor of one. The posterior probabilities are thus:

X      Y      A      B
1/20   1/20   9/20   9/20
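As a check, running the same sia_update sketch from above on this prior reproduces these values:

```python
# Reusing sia_update, Fraction and observers from the sketch above.
priors = {"X": Fraction(2, 30), "Y": Fraction(1, 30),
          "A": Fraction(18, 30), "B": Fraction(9, 30)}
posterior = sia_update(priors, observers)

print(posterior)                        # X: 1/20, Y: 1/20, A: 9/20, B: 9/20
print(posterior["A"] + posterior["B"])  # late filter: 9/10
```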

The posterior probability of a late filter is (9/20 + 9/20) = 9/10, the same as before: again SIA has not changed the probability of where the filter is. But it gets worse; if, for instance, we had started with the priors:

X      Y      A      B
1/30   2/30   18/30  9/30

This is the same as before, but with the priors of X and Y swapped. The early filter still has only one chance in ten, a priori. But now, if we apply SIA, the posterior probabilities of X and Y are 1/41 and 4/41, totalling 5/41 > 1/10. Here applying SIA has increased our chances of survival!
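And the same sketch reproduces this reversal:

```python
# Reusing sia_update, Fraction and observers from the sketch above.
priors = {"X": Fraction(1, 30), "Y": Fraction(2, 30),
          "A": Fraction(18, 30), "B": Fraction(9, 30)}
posterior = sia_update(priors, observers)

print(posterior["X"] + posterior["Y"])  # early filter: 5/41, up from the prior 1/10
```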

In general, there are a lot of reasonable priors over possible worlds where SIA makes little or no difference to the odds of the great filter, either way.

Conclusion: Do I believe that this has demonstrated that the SIA/​great filter argument is nonsense? No, not at all. I think there is a lot to be gained from analysing the argument, and I hope that Katja or Robin or someone else—maybe myself, when I get some spare time, one of these centuries—sits down and goes through various scenarios, looks at classes of reasonable priors and evidence, and comes up with a conclusion about what exactly SIA says about the great filter, the strength of the effect, and how sensitive it is to prior changes. I suspect that when the dust settles, SIA will still slightly increase the chance of doom, but that the effect will be minor.

Having just saved humanity, I will now return to more relaxing pursuits.