So there are three possibilities:
1. We will die off very soon, perhaps in the next 10 years. This is possible because of “success” in bioengineering and AI.
2. In the next 10 years, the DA will be rebutted in a very spectacular and obvious way. Everyone thereafter will know this rebuttal.
3. The DA is wrong.
My opinion is that a near-term die-off is inevitable, and only something really crazy could save us: quantum immortality, an AI crash project, extraterrestrial intelligence, or the owners of our simulation.
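(For reference, the DA at issue here is the standard Gott/Carter–Leslie bound. A minimal worked form, under the usual assumption that one’s birth rank n is a uniform draw from the N humans who will ever live; the figure n ≈ 10^11 births so far is a rough illustration, not a number from this thread:

\[
  P\!\left(\frac{n}{N} \le 0.05\right) \approx 0.05
  \quad\Longrightarrow\quad
  N < 20\,n \ \text{(at 95\% confidence)} .
\]

With n ≈ 10^11, the total number of humans ever born would be capped near 2 × 10^12, which is what makes a near-term die-off look “typical” under the DA.)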
I suppose a “really fast, really soon” decline is possible … something so quick that essentially no-one notices, and hence there isn’t a lot of discussion about why the DA seems to have been right when the decline happens.
However, one problem is making this model generic across multiple civilisations of observers (not just humans). Is it really plausible that essentially every civilisation that arises crashes almost immediately after someone first postulates the DA (so the total class of DA-aware observers is really tiny in every civilisation)? If some civilisations are more drawn-out than others, and have a huge number of observers thinking about the DA before collapse, then we are, again, atypical members of the DA class.
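The typicality worry above can be made concrete with a toy Monte Carlo. A minimal sketch, with made-up numbers of my own choosing (90% of civilisations crash almost immediately, leaving ~100 DA-aware observers each; the other 10% are drawn-out, with ~10^6 each); nothing below depends on the exact figures:

import random

# Toy model: each civilisation either crashes "fast" (few DA-aware
# observers ever exist) or is "drawn-out" (many exist). All numbers
# are illustrative assumptions, not estimates.
random.seed(0)

N_CIV = 10_000
civs = []
for _ in range(N_CIV):
    if random.random() < 0.9:
        civs.append(("fast", 100))            # crashes soon after the DA is posed
    else:
        civs.append(("drawn-out", 1_000_000)) # long tail of DA-aware observers

fast_civ_fraction = sum(1 for kind, _ in civs if kind == "fast") / N_CIV
total_observers = sum(n for _, n in civs)
fast_observers = sum(n for kind, n in civs if kind == "fast")

# Probability that a uniformly sampled DA-aware observer (across all
# civilisations) lives in a fast-crash civilisation:
p_fast = fast_observers / total_observers

print(f"fraction of civilisations that crash fast:   {fast_civ_fraction:.3f}")
print(f"chance a random DA-aware observer is in one: {p_fast:.5f}")
# ~0.0009: even though most civilisations crash fast, a typical
# DA-aware observer almost certainly belongs to a drawn-out one.

So unless essentially every civilisation crashes quickly, finding ourselves among the very first DA-aware observers is exactly the atypicality described above.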
It is a really interesting point: to consider all DA-aware observers across all civilizations. So maybe technology is the main reason why all civilizations crash. And understanding of the DA typically appears together with science. So this explains why understanding of the DA is coincident with global catastrophes.
But a stronger idea could be that understanding of the DA has a causal relation with catastrophes, something like a strong anthropic principle. For now I think this is a good idea for science fiction, because it is not clear how understanding of the DA could destroy the world, but maybe it is worth more thought.