If one dives into the essay itself, one sees some auxiliary PNRs which the author thinks have already occurred.
For instance, in the past, it would have been conceivable for a single country of the G20 to unilaterally make it their priority to ban the development of ASI and its precursors.
In the past, it would have been conceivable for any country in the West to decide to fight off Big Tech and lead the collective fight.
I think both are still quite conceivable, both in a democracy and in a non-democratic state (with the caveat that there is no guarantee that all countries in the West stay democratic). So here I disagree with the author.
But the Soft PNR is tricky. In the essay, the author defines the Soft PNR differently than in this post:
The Soft PNR is when AI systems are so powerful that, although they “can” theoretically be turned off, there is not enough geopolitical will left to do so.
And geopolitical will is something which can fluctuate. Right now it’s absent: there is no geopolitical will to do so. But in the future it might emerge (and then disappear again, and so on).
When something can fluctuate in this fashion, in what sense can one talk about a “point of no return”?
But as a first approximation, this framing might still be useful, at least before one dives into the details of how the inevitable disagreements over this matter, between countries and between political forces, might be resolved. Diving into those disagreements reveals different “points of no return”, e.g. the point at which the coalition of pro-AI people and AIs becomes effectively undefeatable.
For instance, in the past, it would have been conceivable for a single country of the G20 to unilaterally make it their priority to ban the development of ASI and its precursors.
In the past, it would have been conceivable for any country in the West to decide to fight off Big Tech and lead the collective fight.
I think unilateralism + leadership is quite inconceivable right now.
I am interested in any scenario you have in mind (not with the intent to fight whatever you suggest, just to see if there are ideas or mechanisms I may be missing).
And geopolitical will is something which can fluctuate. Right now it’s absent: there is no geopolitical will to do so. But in the future it might emerge (and then disappear again, and so on).
This is a failure of my writing: I should have made it clear that it’s only a PNR when there’s no going back.
My point with “when there’s not enough geopolitical will left” was that we can reach a point where there’s just not enough left. Not only “right now nobody wants to regulate AI” but “right now, everything is so captive that there’s not really any independent will to govern left anymore”.
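One way to make the “no going back” distinction precise is to treat the PNR as an absorbing state. Below is a minimal toy sketch (entirely my own illustration, not from the essay; every transition probability is a made-up placeholder): geopolitical will fluctuates between “will” and “no_will”, but once the chain enters “PNR” it never leaves, so a fluctuating quantity can still have a genuine point of no return.

```python
# Toy model: geopolitical will as a Markov chain with an absorbing PNR state.
# All probabilities are illustrative placeholders, not estimates.
import random

TRANSITIONS = {
    "will":    [("will", 0.60), ("no_will", 0.35), ("PNR", 0.05)],
    "no_will": [("will", 0.20), ("no_will", 0.70), ("PNR", 0.10)],
    "PNR":     [("PNR", 1.00)],  # absorbing: no going back
}

def step(state: str) -> str:
    """Sample the next state from the current state's transition row."""
    states, weights = zip(*TRANSITIONS[state])
    return random.choices(states, weights=weights)[0]

def simulate(start: str = "no_will", horizon: int = 100) -> str:
    """Run the chain for `horizon` steps and return the final state."""
    state = start
    for _ in range(horizon):
        state = step(state)
    return state

if __name__ == "__main__":
    runs = [simulate() for _ in range(10_000)]
    print("fraction absorbed at PNR:", runs.count("PNR") / len(runs))
```

In this toy model the “will”/“no_will” states keep fluctuating, yet the probability of ending up absorbed at the PNR approaches 1 as the horizon grows; that is the sense in which “no going back” is compatible with fluctuating will.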
I think unilateralism + leadership is quite inconceivable right now.
I am interested in any scenario you have in mind (not with the intent to fight whatever you suggest, just to see if there are ideas or mechanisms I may be missing).
I think, with the G20, it’s very easy to imagine. Here is one such scenario.
Xi Jinping decides (for whatever reason) that ASI needs to be stopped. He orders a secret study, and if the study indicates that there are feasible pathways, he orders work to proceed along some of them (perhaps in parallel).
For example, he might demand international negotiations and threaten nuclear war, and he is capable of making China line up behind him in support of this policy.
On the other hand, if that study suggests a realistic path to a unilateral pivotal act, he might also order a secret project to perform that pivotal act.
With a democracy, it’s trickier, especially given that democratic institutions are in bad shape right now.
But if the labor market is a disaster due to AI, and the state is not stepping in adequately to make people whole in the material sense, I can imagine anti-AI forces taking power via democratic means (the main objection is timelines: four years is like infinity these days). Incumbent politicians might also start changing their positions on this, if things get bad enough and there is enough pressure.
A more exotic scenario is an AI executive figuring out how to take over a nuclear-armed country while armed only with a specialized sub-AGI system, and then deciding to impose a freeze on AI development. “A sub-AGI-powered, human-led coup, followed by a freeze.” The country in question might support this, depending on the situation.
Another exotic scenario is a group of military officers staging a coup, with “stop AI” as one plank of their platform. The country would then consist of people who support them and people who stay mostly silent out of fear.
I think it’s not difficult to generate scenarios. None of these scenarios is very pleasant, unfortunately… (And there is no guarantee that any such scenario would actually succeed at stopping ASI. That’s the problem with all these AI bans, scary state forces, and nuclear threats: it’s not clear whether they would actually prevent the development of an ASI by a small actor; there are too many unknowns.)
It’s an interesting question.
Thanks for responding.