The more AI is seen as an existential or strategic threat, the more likely a sudden breakthrough could spark a crisis akin to the Cuban missile crisis. For example, imagine a Chinese lab reaching recursive self-improvement, with progress accelerating to possibly strategically decisive levels in the span of hours or days. By posing a threat of irreversible strategic loss to other stakeholders, such a breakthrough could escalate to military and even nuclear conflict, and the resulting crisis would likely be harder to de-escalate than previous ones; it is much more difficult to verifiably stop AI development than it is to remove some missiles from Cuba.
Based on public information, preparedness for this kind of crisis appears minimal and poorly coordinated, and if classified plans exist, they are not serving an external signaling or deterrence function:
None of the AI safety institutes have published anything about crisis preparedness as far as I can find.
In a fast-paced crisis, rapid and reliable communications between stakeholders have historically been crucial, but such channels currently exist only for limited scopes and stakeholders. A multistakeholder crisis communications channel (CATALINK) is only in development.
No individual nation seems to have pre-existing consensus on how to react to an adversary’s AI breakthrough.
The UN Operations and Crisis Centre has not published any plans, while the Security Council would likely be paralyzed by P5 veto dynamics. An emergency session of the General Assembly could be convened within 24 hours after other attempts fail (and it could vote to morally condemn a stakeholder).
NATO has Non-Article 5 crisis response operations, but these do not seem suitable or prepared for ASI-style breakthroughs. The lack of pre-existing national consensus would likely significantly slow agreement on any NATO response.
While some plans for handling sudden breakthroughs exist in the AI safety community, these are not broadly coordinated and, to my knowledge, do not account for the national security tensions that are likely to arise. There is no pre-existing consensus on what the AI safety community should do in a crisis.
In practice, a sudden breakthrough would result in a massive scramble to negotiate internally and externally on how to react. Without pre-negotiated protocols for pausing frontier development, the only credible responses available to other actors may be coercive ones.