What form would this take? American inspectors at DeepSeek? Chinese inspectors at OpenAI and its half-dozen rivals?
To draw parallels from nuclear weapons: the START I arms-reduction treaty between the US and the USSR provided for roughly a dozen distinct types of on-site inspection, including Russian inspectors at US sites and vice versa. Beyond that, the International Atomic Energy Agency coordinates safeguards that inhibit dangerous uses of nuclear technology, and many more techniques and agreements have been cooperatively deployed to improve outcomes.
With AI, precursors to this already exist: the UK AI Safety Institute (AISI), for example, evaluates US-developed models for safety. If AI capabilities and dangers continue to grow, development will likely come under increasing government control and monitoring. And the more AI is treated as a national security issue, the more pressure there will be for international cooperation, from adversaries and allies alike.
This cooperation might include reciprocal inspection regimes, joint safety standards, transparency requirements for training runs above certain compute thresholds, and international verification mechanisms. While military action against another nation's AI program remains a theoretical option, the threshold for it would be extremely high given both nuclear deterrence and the diplomatic costs. Instead, we'd likely see a gradual evolution of international governance, similar to what we've seen with nuclear technology, but hopefully with more robust cooperation given the shared risks and benefits inherent to ASI.
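To make the compute-threshold idea concrete, here is a minimal sketch of how such a reporting rule might be checked. It assumes the commonly used ~6 × parameters × tokens approximation for dense-transformer training FLOPs, and borrows the 10^26-operation figure from the 2023 US executive order on AI as a stand-in threshold; the function names and the example model are hypothetical, not any real regime's API.

```python
# Minimal sketch: would a planned training run cross a compute threshold
# that triggers transparency/reporting obligations under a hypothetical
# treaty regime? The 1e26 figure is borrowed from the 2023 US executive
# order on AI; everything else here is illustrative.

REPORTING_THRESHOLD_FLOP = 1e26  # hypothetical treaty reporting threshold

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate training compute for a dense transformer (~6 * N * D)."""
    return 6.0 * n_params * n_tokens

def requires_disclosure(n_params: float, n_tokens: float) -> bool:
    """True if the estimated run meets or exceeds the reporting threshold."""
    return estimated_training_flop(n_params, n_tokens) >= REPORTING_THRESHOLD_FLOP

if __name__ == "__main__":
    # Example: a 2-trillion-parameter model trained on 50 trillion tokens.
    params, tokens = 2e12, 50e12
    flop = estimated_training_flop(params, tokens)
    print(f"Estimated compute: {flop:.2e} FLOP")  # ~6.00e+26
    print(f"Disclosure required: {requires_disclosure(params, tokens)}")
```

The arithmetic is the trivial part; the governance challenge is verifying the inputs, which is why the inspection and verification mechanisms above matter far more than the threshold formula itself.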