I think unilateralism + leadership is quite inconceivable right now.
I am interested in any scenario you have in mind (not with the intent to fight whatever you suggest, just to see if there are ideas or mechanisms I may be missing).
I think that for a G20 country it's very easy to imagine. Here is one such scenario.
Xi Jinping decides (for whatever reason) that the ASI needs to be stopped. He orders a secret study, and if the study indicates that there are feasible pathways, he orders work to proceed along some of them (perhaps in parallel).
For example, he might demand international negotiations and threaten nuclear war, and he is capable of making China line up behind him in support of this policy.
On the other hand, if that study suggests a realistic path to a unilateral pivotal act, he might also order a secret project aimed at performing that pivotal act.
With a democracy, it’s more tricky, especially given that democratic institutions are in bad shape right now.
But if the labor market is a disaster due to AI, and the state is not stepping in adequately to make people whole in the material sense, I can imagine anti-AI forces taking power via democratic means (the main objection is timelines; 4 years is like infinity these days). Incumbent politicians might also start changing their positions on this if things are bad enough and there is enough pressure.
A more exotic scenario is an AI executive figuring out how to take over a nuclear-armed country while armed only with a specialized sub-AGI system, and then deciding to impose a freeze on AI development: "a sub-AGI-powered, human-led coup, followed by a freeze". The country in question might support this, depending on the situation.
Another exotic scenario is a group of military officers staging a coup on a platform that includes "stop AI" as one of its planks. The country would then consist of people who support them and people who stay mostly silent out of fear.
I think it's not difficult to generate scenarios. None of these scenarios is very pleasant, unfortunately… (And there is no guarantee that any such scenario would actually succeed at stopping the ASI. That's the problem with all these AI bans, scary state forces, and nuclear threats: it's not clear whether they would actually prevent a small actor from developing an ASI; there are too many unknowns.)
Thanks for responding.