The whole doomsday argument seems to me to be based on a vaguely frequentist approach, where you can define anything as the class of observations. You raise a great point here, changing the reference class from “people” to “experiences.” The fact that the predicted end-of-the-world date varies according to the reference class chosen sounds a lot like the accusations of subjectivity leveled at frequentism.
Actually, I think it is more like the charge of subjectivism in Bayesian theory, and for similar reasons.
If we take a Bayesian model, then we have to consider a wide range of hypotheses for what our reference class could be (all observers, all observations, all human observers, all human observations, all observations by observers aware of Bayes’ theorem, all observers aware of the DA). We should then apply a prior probability to each reference class (on general grounds of simplicity, economy, overall reasonableness or whatever) as well as a prior probability to each hypothesis about population models (how observer numbers change over time; total number of observers or total integrated observer experiences over time). Then we churn through Bayes’ theorem using our actual observations, and see what drops out.
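For concreteness, here is a minimal sketch of that update in Python. Everything in it is an illustrative assumption: the hypothesis labels, priors, and likelihoods are made-up toy numbers, not anything derived from the post.

```python
# Each joint hypothesis pairs a reference class with a population model.
# Priors and likelihoods are invented toy numbers; the likelihood is the
# probability the hypothesis assigns to our actual observations.
hypotheses = {
    ("all observers", "growth then long plateau"): (0.30, 0.01),
    ("all observers", "growth, peak, collapse"): (0.30, 0.20),
    ("all observations", "growth, peak, collapse"): (0.25, 0.40),
    ("DA-aware observers", "growth, peak, collapse"): (0.15, 0.05),
}

# Bayes' theorem: posterior is proportional to prior times likelihood.
evidence = sum(prior * lik for prior, lik in hypotheses.values())
posterior = {h: prior * lik / evidence for h, (prior, lik) in hypotheses.items()}

for h, p in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(h, round(p, 3))
```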
My point in the post is that a pretty simple reference class (all observations) combined with a pretty simple population model (exponential growth → short peak → collapse) seems to do the job. It predicts what we observe now very well.
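To see why such a model predicts the present well, here is a toy calculation (the growth rate, timescales, and collapse rate are invented for illustration): when observations accrue exponentially, most of the observations that will ever exist fall in the last few doubling times before the peak, so a randomly sampled observation should expect to find itself near the end.

```python
import numpy as np

# Toy version of the model: observation rate grows at 3%/year for 250 years,
# then collapses rapidly. All numbers are illustrative, not forecasts.
years = np.arange(300)
peak = 250
rate = np.where(
    years < peak,
    np.exp(0.03 * years),                                  # exponential growth
    np.exp(0.03 * peak) * np.exp(-0.5 * (years - peak)),   # sharp collapse
)

# Share of all observations made in the 50 years just before the peak:
share_near_peak = rate[peak - 50:peak].sum() / rate.sum()
print(f"share of observations in the last 50 pre-peak years: {share_near_peak:.2f}")
# Roughly 0.7 here: a typical observation finds itself near the peak,
# so observing a world like ours "now" is just what the model predicts.
```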
We should then apply a prior probability to each reference class (on general grounds of simplicity, economy, overall reasonableness or whatever) as well as a prior probability to each hypothesis
What does it mean to apply a prior probability to a reference class, as opposed to applying one to a hypothesis?
The hypothesis is “this reference class is right for my observations” (in the sense that the observations are typical for that class). There might be several different reference classes which are “right”, so such hypotheses are non-exclusive.
That’s an interesting point.
I suspect they all are, weighted by “general grounds of simplicity, economy, overall reasonableness or whatever”, i.e. Kolmogorov complexity.
Therefore, asking whether “wildlife” or “humans” or whichever simple reference class is the right one is the wrong approach.
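As a toy illustration of that weighting: Kolmogorov complexity itself is uncomputable, so any practical prior is a proxy, and the description lengths below are invented for the sake of the example.

```python
# Weight each candidate reference class by 2**(-description length in bits),
# a crude stand-in for a Kolmogorov-complexity prior. The bit counts are
# invented for illustration; true K-complexity is uncomputable.
description_bits = {
    "all observations": 4,
    "all observers": 5,
    "all human observers": 8,
    "all DA-aware observers": 12,
}

weights = {h: 2.0 ** -bits for h, bits in description_bits.items()}
total = sum(weights.values())
prior = {h: w / total for h, w in weights.items()}

for h, p in prior.items():
    print(h, round(p, 3))
# Every class keeps nonzero weight, so the question is not which single
# class is "right" but how posterior mass spreads across all of them.
```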