Another issue… Yes, restricting the reference class to people who are discussing the DA is possible, which would imply that humans will stop discussing the DA soon… not necessarily that we will die out soon. This is one of the ways of escaping from a doomish conclusion.
Except when you then think “Hmm, but then why does the DA class disappear soon?” If the human race survives for even a medium-long period, then people will return to the DA from time to time over the next centuries/millennia (e.g. it could be part of a background course on how not to apply Bayes’s theorem), in which case we look like atypically early members of the DA class right now. Or even if humanity struggles on for a few more decades and then collapses this century, we still look like atypically early members of the DA class right now (I’d expect a lot of attention to the DA when it becomes clear to the world that we’re about to collapse).
Finally, the DA reference class is more complicated than the wider reference class of all observations, since there is more built into its definition. Since it is more complex and has less predictive power (it doesn’t predict we’d be this early in the class), it looks like the incorrect reference class for us to use right now.
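The Bayesian step behind “we look atypically early” can be made concrete with a toy calculation (my own sketch, not from the thread; all the numbers are illustrative assumptions). If we treat our rank in the DA reference class as a uniform draw over its members, observing a low rank strongly favours a small total class:

```python
def posterior_small(rank, n_small, n_large, prior_small=0.5):
    """Posterior probability that the DA reference class is small,
    given that we are member number `rank` of it (uniform-draw assumption)."""
    # Under a class of total size N, any particular rank has likelihood 1/N,
    # and likelihood 0 if the rank exceeds N.
    like_small = (1.0 / n_small) if rank <= n_small else 0.0
    like_large = (1.0 / n_large) if rank <= n_large else 0.0
    num = prior_small * like_small
    den = num + (1 - prior_small) * like_large
    return num / den

# Being (say) the 1,000th DA-discusser pushes 50/50 prior odds
# to ~0.999 in favour of a class of 10,000 over one of 10,000,000.
print(posterior_small(1_000, 10_000, 10_000_000))
```

This is just the usual DA update; the point in the text is that if the restricted class were the right one, we would not expect to find ourselves at such a low rank within it.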
So there are three possibilities:
1. We will die off very soon, perhaps in the next 10 years. This is possible because of “success” in bioengineering and AI.
2. In the next 10 years the DA will be rebutted in a very spectacular and obvious way, and everyone thereafter will know this rebuttal.
3. The DA is wrong.
My opinion is that a very soon die-off is inevitable and only something really crazy could save us. It could be quantum immortality, or an AI crash project, or extraterrestrial intelligence, or the owners of our simulation.
I suppose a “really fast, really soon” decline is possible… something so quick that essentially no one notices, and hence there isn’t a lot of discussion about why the DA seems to have been right when the decline happens.
However, one problem is making this model generic across multiple civilisations of observers (not just humans). Is it really plausible that essentially every civilisation that arises crashes almost immediately after someone first postulates the DA (so the total class of DA-aware observers is really tiny in every civilisation)? If some civilisations are more drawn-out than others, and have a huge number of observers thinking about DA before collapse, then we are—again—atypical members of the DA class.
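The arithmetic behind this objection is simple to check with made-up numbers (a toy model of my own, not from the thread; the parameters are illustrative assumptions). Even if only a small fraction of civilisations have drawn-out declines, those civilisations can contain almost all DA-aware observers, so a randomly chosen DA-aware observer is very unlikely to live in an instant-crash civilisation:

```python
def drawn_out_share(p_slow, obs_quick, obs_slow):
    """Fraction of all DA-aware observers (across civilisations) who live in
    drawn-out civilisations, given the fraction of such civilisations and the
    number of DA-aware observers each type produces."""
    slow = p_slow * obs_slow          # expected observers in slow-decline civs
    quick = (1 - p_slow) * obs_quick  # expected observers in quick-crash civs
    return slow / (slow + quick)

# With 1% drawn-out civilisations, 100 DA-aware observers per quick crash
# and 1,000,000 per drawn-out decline, about 99% of all DA-aware observers
# live in the drawn-out ones.
print(drawn_out_share(0.01, 100, 1_000_000))
```

So for the “tiny DA class everywhere” model to work, essentially *no* civilisation can be drawn-out, which is the strong claim the paragraph above questions.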
It is a really interesting point, to consider all DA-aware observers in all civilizations. So maybe technologies are the main reason why all civilizations crash, and understanding of the DA typically appears together with science. This would explain why understanding of the DA coincides with global catastrophes.
But a stronger idea could be that understanding of the DA has a causal relation with catastrophes, something like a strong anthropic principle. For now I think it is a good idea for science fiction, because it is not clear how DA understanding could destroy the world, but maybe it is worth more thinking about.