Hmmm… Good question. Let’s do the Bayesian thing.
I think it’s because of our priors. In the normal city case we already know a lot about human behavior; we have built up very strong priors that constrain the hypothesis space pretty hard. The hotter-chili hypothesis I came up with seems plausible, and there are others, but the space of them is rather tightly constrained, so we can do forward modelling fairly well.

Whereas in the Doomsday Argument case, or my artificial analogy to it involving 10-minute lifespans and something very weird happening, our current sample size for “How many sapient species survive their technological adolescence?” or “What happens later in the day in cities of sapient mayflies?” is zero. In dynamical systems terms, the rest of the day is a lot more Lyapunov times away. From our point of view, a technological adolescence looks like a dangerous process, but making predictions is hard, especially about the future of a very complex, very non-linear system with 8.3 billion humans and an exponentially rising amount of AI in it. The computational load of accurate modelling is simply impractical, so even 5–10 years out our future looks like a Singularity to our current computational abilities.

So the constraints on our hypothesis distribution are weak, and we end up relying mostly on our arbitrary choice of initial priors. We’re still at the “I really just don’t know” point in the Bayesian process on this one. That’s why people’s P(DOOM)s vary so much: nobody actually knows, they just have different default priors, depending mostly on temperament. Our future is still a Rorschach inkblot. Which is not a comfortable time to be living in.
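To put a rough number on the Lyapunov point, here’s a toy calculation. The numbers are invented, and the exponential-error-growth model is the standard textbook idealization of chaos, not a claim about any specific system:

```python
import math

# Illustrative numbers only: in a chaotic system, a small initial error
# delta_0 grows roughly like delta_0 * e^(t / t_lyapunov), so the usable
# forecast horizon is t_lyapunov * ln(tolerance / delta_0). That horizon
# grows only logarithmically with measurement precision.

t_lyapunov = 1.0   # one Lyapunov time, in arbitrary units
tolerance = 1.0    # error level at which the forecast becomes useless

for delta_0 in (1e-2, 1e-4, 1e-8, 1e-16):
    horizon = t_lyapunov * math.log(tolerance / delta_0)
    print(f"initial error {delta_0:.0e} -> horizon ~{horizon:.1f} Lyapunov times")

# Each 100x improvement in initial accuracy buys a fixed ~4.6 extra
# Lyapunov times, which is why forecasting many Lyapunov times out is
# computationally hopeless.
```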
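And to make the zero-sample-size point concrete, here’s a minimal sketch, assuming a toy Beta-Binomial model of “fraction of sapient species that survive technological adolescence,” with prior parameters I made up to stand in for temperament:

```python
# Toy Beta-Binomial update -- my own hypothetical model, not anything
# established. The point: with a sample size of zero, the posterior IS
# the prior, so each temperament just reads back its own starting guess.

observed_species, observed_survivors = 0, 0  # our actual evidence so far

priors = {
    "optimist":  (8, 2),  # Beta(8, 2): prior mean survival rate 0.8
    "pessimist": (2, 8),  # Beta(2, 8): prior mean survival rate 0.2
}

for name, (a, b) in priors.items():
    # Conjugate update: survivors add to a, non-survivors add to b
    a_post = a + observed_survivors
    b_post = b + (observed_species - observed_survivors)
    prior_mean = a / (a + b)
    post_mean = a_post / (a_post + b_post)
    print(f"{name}: prior mean {prior_mean:.2f} -> posterior mean {post_mean:.2f}")

# optimist: prior mean 0.80 -> posterior mean 0.80
# pessimist: prior mean 0.20 -> posterior mean 0.20
```

With zero data the update is a no-op, which is exactly the “I really just don’t know” stage: the posterior is whatever Rorschach blot you walked in with.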