I think your post is very good at laying out the heuristics at play. At the same time, it’s clear that you’re biased towards the Separatist position. I believe that when we follow the logic all the way down, the Unionist vs. Separatist framing taps into deep philosophical topics that are hard to settle one way or the other.
To respond to your memes as a Unionist:
Maybe some future version of humanity will want to do some handover, but we are very far from the limits of human potential. As individual biological humans we can be much smarter and wiser than we are now, and the best option is to delegate to smart and wise humans.
I would like this, but I think it is unrealistic: the pace of human biological progress is orders of magnitude slower than the pace of AI progress.
We are even further from the limits of how smart and wise humanity can be collectively, so we should mostly improve that first. If the maxed-out competent version of humanity decides to hand over after some reflection, it’s a very different version from “handover to moloch.”
I also would like this, but I think it is unrealistic. The UN was founded in 1945, yet the world still has a lot of conflict. And what has happened to technology in that same period?
Often, successionist arguments have the motte-and-bailey form. The motte is “some form of succession in future may happen and even be desirable”. The bailey is “forms of succession likely to happen if we don’t prevent them are good”.
I’m reading this as making a claim about the value of non-forcing action. Daoists would say that a non-forcing mindset is indeed more enlightened than living in deep struggle.
Beware confusion between progress on persuasion and progress on moral philosophy. You probably wouldn’t want ChatGPT 4o running the future. Yet empirically, some ChatGPT 4o personas already persuade humans to give them resources, form emotional dependencies, and advocate for AI rights. If these systems can already hijack human psychology effectively without necessarily making much progress on philosophy, imagine what actually capable systems will be able to do. If you consider the people falling for 4o fools, it’s important to track that this is the worst level of manipulation ability you’ll ever see; it will only get smarter from here.
I think this argument is logically flawed: you suggest that misalignment in current, less capable models implies that more capable models will amplify misalignment. My position is that yes, this can happen, but more capable models, engineered in the correct way by humans, will instead solve misalignment.
Claims to understand ‘the arc of history’ should trigger immediate skepticism—every genocidal ideology has made the same claim.
Agree that this contains risks. However, you are using the same memetic weapon by claiming to understand successionist arguments.
Agree, and so the question in my view is how to achieve a balanced union.
Given our incomplete understanding of consciousness, meaning, and value, replacing humanity involves potentially destroying things we don’t understand yet, and possibly irreversibly sacrificing all value.
Agree that we should not replace humanity; I hope that it is preserved.
Basic legitimacy: Most humans want their children to inherit the future. Successionism denies this. The main paths to implementation are force or trickery, neither of which makes it right.
This claim is too strong, as I believe AI successionism can still preserve humanity.
We are not in a good position to make such a decision: Current humans have no moral right to make extinction-level decisions for all future potential humans and against what our ancestors would want. Countless generations struggled, suffered, and sacrificed to get us here; going extinct betrays that entire chain of sacrifice and hope.
In an ideal world, I think we should perhaps pause all AI development until we’ve figured this all out (the downside risk being that the longer we do this, the longer we leave ourselves open to other existential risks, e.g. nuclear war). But my position is that “the cat is already out of the bag,” and so what we have to do is shape our inevitable status as “less capable than powerful AI” in the best possible way.
As a fellow Unionist, I would add that this leaves out another important Unionist/successionist argument, namely that if x-risk is really a big problem, then developing powerful AI is likely the best method of reducing the risk of the extinction of all intelligence (biological or not) from the solar system.
The premises of this argument are pretty simple. Namely:
If there are many effective “recipes for ruin,” to use Nielsen’s phrase, humans will find them before too long, with or without powerful AI. So if you believe there is a large x-risk arising from recipes for ruin, you should believe this risk is still large even if powerful AI is never developed. Maybe it takes a little longer to manifest without AI helping to find those recipes, but it’s unlikely to take, say, centuries longer.
And an AI much more powerful than (baseline, unaugmented, biological) humans is likely to be much more capable of at least defending itself against extinction than we are or are likely to become. It may or may not want to defend us, and it may or may not want to kill us all, but it will likely both want to preserve itself and be good at doing so.
So if x-risk is real and large, then the choice between developing powerful AI and stopping that development is a choice between a future where at least AI survives (and maybe, as a bonus, it is nice enough to preserve us too) and a future where we kill ourselves off anyway without AI “help,” leaving nothing intelligent orbiting the Sun. The claimed possible future where humanity preserves a worthwhile existence unaided is much lower probability than either of these, even if AI development is stoppable.
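To make the structure of this argument concrete, here is a minimal sketch in Python with entirely made-up toy probabilities (every number below is my own illustrative assumption, not a claim from the argument above): it shows that once you grant the premises, the conclusion is simple arithmetic.

```python
# Toy model of the argument above. Every probability here is an
# illustrative assumption, not an empirical estimate.

# Premise 1: "recipes for ruin" make human self-extinction likely
# within a few centuries, with or without AI.
p_human_self_extinction = 0.8

# Premise 2: a powerful AI is very likely able to preserve at least
# itself; whether it also preserves humanity is uncertain.
p_ai_survives = 0.95
p_ai_preserves_humans = 0.3  # conditional on the AI surviving

# Future A: develop powerful AI.
p_intelligence_survives_with_ai = p_ai_survives
p_humans_survive_with_ai = p_ai_survives * p_ai_preserves_humans

# Future B: never develop powerful AI.
p_intelligence_survives_without_ai = 1 - p_human_self_extinction

print(f"P(intelligence survives | AI):    {p_intelligence_survives_with_ai:.2f}")
print(f"P(humans survive | AI):           {p_humans_survive_with_ai:.2f}")
print(f"P(intelligence survives | no AI): {p_intelligence_survives_without_ai:.2f}")
```

Under these (contestable) numbers the develop-AI branch dominates on “any intelligence survives,” so the real disagreement is over the premise values, not the arithmetic.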
Fwiw I do not work in AI and so do not have the memetic temptations the OP theorizes as a driver of successionist views.
Agree, and I’d love to see the Separatist counterargument to this. Maybe it takes the shape of “humans are resilient and can figure out solutions to their own problems,” but to me this feels too small-minded: we know, for example, that during the Cold War it was basically just dumb luck that avoided catastrophe.
You might be interested in Unionists vs. Separatists.