Lots of people would jump at the chance to gamble the entire future against their own immortality on those odds.
This would assume that those people are also convinced that something like radical life extension is possible in principle, and that more advanced AI would be required to deliver it.
I have no idea how many people that is true for. Many people dismiss suggestions of radical life extension with the same reflexive “that’s sci-fi” reaction that AI x-risk scenarios get. Even if they became convinced on the AI side, life extension might stay in that category.
And if they did get convinced of its possibility, the most likely scenario I could see would be one where advances in narrower AI had already delivered proofs of concept. You could imagine it being solved by something like extensive biological modeling tools that were more developed than what we have today, but did not yet cross the threshold to transformative AI.
It seems to me that believing ASI can kill you and believing ASI can save you are both pretty directly downstream of believing in ASI at all. Since the premise is that everyone believes pretty strongly in the possibility of doom, it seems they’d mostly get there by believing in ASI and would mostly also believe in the upside potentials too.
There are several intermediate steps in the argument from ASI to doom.
Yes. But because we’re discussing a scenario in which the world is ready to slow down or shut down AGI research, I’m assuming those steps have been crossed.
The biggest step IMO, “alignment is hard”, doesn’t intervene between taking ASI seriously and thinking it could prevent you from dying of natural causes.