As a fellow Unionist, I would add that this leaves out another important Unionist/successionist argument: if x-risk really is a big problem, then developing powerful AI is likely the best way to reduce the risk of the extinction of all intelligence (biological or not) from the solar system.
The premises of this argument are pretty simple. Namely:
1. If there are many effective “recipes for ruin,” to use Nielsen’s phrase, humans will find them before too long, with or without powerful AI. So if you believe there is a large x-risk arising from recipes for ruin, you should believe this risk remains large even if powerful AI is never developed. It might take a little longer to manifest without AI helping find those recipes, but it is unlikely to take, say, centuries longer.
2. An AI much more powerful than (baseline, unaugmented, biological) humans is likely to be much more capable of at least defending itself against extinction than we are or are likely to become. It may or may not want to defend us, and it may or may not want to kill us all, but it will likely both want to preserve itself and be good at doing so.
So if x-risk is real and large, the choice between developing powerful AI and stopping that development is a choice between two futures: one in which at least the AI survives, and perhaps is nice enough to preserve us too, and one in which we kill ourselves off anyway, without AI “help,” and leave nothing intelligent orbiting the Sun. The third claimed possibility, in which humanity preserves a worthwhile existence unaided, is much less probable than either of these, even if AI development is stoppable.
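To make the shape of the comparison concrete, here is a minimal back-of-envelope sketch in Python. Every probability in it is invented purely to show how the two premises combine; none is a real estimate, and you can swap in your own numbers:

```python
# Toy comparison of the two futures in the argument above.
# All numbers are made up for illustration only, not real estimates.

p_ruin = 0.8              # assumed: humans eventually find a "recipe for ruin"
                          # (premise 1: roughly the same with or without powerful AI)
p_ai_self_preserves = 0.9 # assumed: a powerful AI survives such a catastrophe (premise 2)

# Without powerful AI, intelligence persists only if ruin never happens.
p_survives_without_ai = 1 - p_ruin

# With powerful AI, intelligence persists if ruin never happens,
# or if ruin happens but the AI preserves itself.
p_survives_with_ai = (1 - p_ruin) + p_ruin * p_ai_self_preserves

print(f"without AI: {p_survives_without_ai:.2f}")  # 0.20
print(f"with AI:    {p_survives_with_ai:.2f}")     # 0.92
```

The point is structural: if premise 1 fixes the ruin probability regardless of what we do, the only remaining lever is whether something exists that can survive the ruin.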
Fwiw I do not work in AI and so do not have the memetic temptations the OP theorizes as a driver of successionist views.
Agree, and I’d love to see the Separatist counterargument to this. Maybe it takes the shape of “humans are resilient and can figure out solutions to their own problems,” but to me that feels too small-minded: we know, for example, that during the Cold War it was basically just dumb luck that avoided catastrophe.