Superintelligence that both lets humans survive (or revives cryonauts) and doesn’t enable indefinite lifespans is a very contrived package. Grading “doom” on concerns centrally about the first decades to centuries of the post-AGI future (value/culture drift, successors, the next few generations of humanity) fails to take into account that the next billions of years are also what could happen to you or to people you know personally, if there is a future for originally-humans at all.
(This is analogous to the “missing mood” of not taking superintelligence into account when discussing future concerns of, say, 2040–2100 as if superintelligence weren’t imminent. In this case, the thing not taken into account is the indefinite personal lifespans of people alive today, rather than the overall scope of the imminent disruption of the human condition.)
Superintelligence that both lets humans survive (or revives cryonauts) and doesn’t enable indefinite lifespans is a very contrived package.
I don’t disagree, but I think we might not agree on the reason. Superintelligence that lets humanity survive at all (with enough power/value to last for more than a few thousand years, whether or not individuals live beyond 150 years or so) is pretty contrived.
There’s just no reason to keep significant amounts of biological sub-intelligence around.