The Doomsday argument is utter BS because one cannot reliably evaluate probabilities without fixing a probability distribution first. Knowing nothing more than the number of humans who have existed so far, the argument devolves into arguing over which probability distribution to pick out of an uncountable number of possibilities. An honest attempt to address the question would start by modeling human population fluctuations, including various extinction events. Such a model has multiple free parameters: rate of growth, the distribution of odds of various extinction-level events, the distribution of odds of surviving each type of event, event clustering, and so on. The minimum number of humans does not constrain these models in any interesting way; that is, it does not privilege one class of models over another, or one set of free parameters over another, to the degree where we could put a model-independent upper bound on the total number of humans with any confidence.
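To make this concrete, here is a minimal Monte Carlo sketch of the kind of model I mean. Every parameter value in it (growth rate, event hazard, survival odds, birth rate) is a made-up placeholder, not an estimate, and that is exactly the point: the implied distribution over the total number of humans swings by orders of magnitude as you move these knobs.

```python
import random
import statistics

def total_humans(growth=0.01, hazard=1e-3, survive=0.5,
                 start=8e9, cap=1e11, horizon=10_000):
    """One simulated trajectory: total humans ever born.
    Every parameter value here is a placeholder, not an estimate."""
    pop, born = start, start
    for _ in range(horizon):
        if random.random() < hazard:        # an extinction-level event occurs
            if random.random() > survive:   # humanity fails to survive it
                return born
            pop *= 0.1                      # assume 90% of the population dies
        pop = min(pop * (1 + growth), cap)  # growth up to a carrying capacity
        born += pop * 0.02                  # crude births: ~2% of pop per year
    return born

# Vary just one free parameter (the per-year event hazard) and the median
# "total humans ever born" moves by orders of magnitude:
for h in (1e-2, 1e-3, 1e-4):
    runs = [total_humans(hazard=h) for _ in range(300)]
    print(f"hazard={h}: median total ever born ~ {statistics.median(runs):.3g}")
```

And this toy model varies only one knob; the real parameter space (event clustering, survival distributions per event type, changing growth regimes) is far larger.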
If you want to talk productively about Doomsday, you have to get your hands dirty and deal with specific x-risks and their effects, not armchair-theorize from a single number and a few so-called selection/indication principles that have nothing to do with actual human population dynamics.
The DA, in its SSA form (where it is rigorous), comes as a posterior adjustment to all probabilities computed in the way above. It is not an argument that doom is likely, just that doom is more likely than the objective odds would imply, in a precise way that depends on future (and past) population size.
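For concreteness, the adjustment can be shown with a toy numeric sketch of the SSA update over two made-up hypotheses about the total number of humans ever born; the birth rank of ~6×10^10 and the 50/50 prior are illustrative placeholders, not estimates.

```python
n = 6e10                          # your birth rank; ~60 billion is a rough figure
prior = {1e11: 0.5, 1e14: 0.5}    # two toy hypotheses for N = total humans ever

# SSA likelihood: given N humans total, the chance your rank is n is 1/N,
# so the posterior is proportional to prior(N) / N for all N >= n.
posterior = {N: p / N for N, p in prior.items() if N >= n}
Z = sum(posterior.values())
posterior = {N: p / Z for N, p in posterior.items()}

for N in sorted(prior):
    print(f"N = {N:.0e}: prior {prior[N]:.2f} -> posterior {posterior[N]:.4f}")
# The "doom sooner" hypothesis N = 1e11 jumps from 0.50 to ~0.999:
# SSA multiplies the odds ratio by N_large / N_small = 1000.
```

This is the precise sense in which doom becomes "more likely than the objective odds would imply": the update always shifts probability mass toward smaller total populations.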
However, my post shows that the SSA form does not apply to the question people generally ask, so the DA is wrong.