I’m yet to read the paper, but my initial reaction is that this is a classic game-theoretic problem where players have to weigh the incentives to defect or cooperate. For example, I’m not sure a Manhattan Project-style AI effort in the US is unreasonable when China already has something comparable.
My weakly held opinion is that you cannot get adversarial nation-states at varying stages of developing a particular technology to mutually hamstring future development. China is unlikely to halt AI development (it is already moving to restrict DeepSeek researchers from traveling) because it expects the US to accelerate AI development and wants to hedge its bets by developing AI itself. The US won’t stop AI development because it doesn’t trust that China will do so (even with a treaty), and the conversation around military use of AI starts to look different once China has outpaced the US in AI capabilities. Basically, each party wants to be in a position of strength and to maintain mutually assured destruction as a deterrent.
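The defect/cooperate dynamic described above is essentially a one-shot prisoner’s dilemma. A minimal sketch (with illustrative payoff numbers I’ve made up, not anything from the paper) shows why “both race” is the only stable outcome even though “both pause” is better for everyone:

```python
# Illustrative payoffs for a one-shot "pause vs. race" game between two states.
# Each entry maps (player_0_action, player_1_action) -> (player_0_payoff, player_1_payoff).
# The numbers are hypothetical; only their ordering matters (racing while the
# rival pauses beats mutual pause, which beats mutual racing, which beats
# pausing while the rival races).
payoffs = {
    ("pause", "pause"): (3, 3),  # both pause: shared safety benefit
    ("pause", "race"):  (0, 5),  # you pause, rival races: worst outcome for you
    ("race",  "pause"): (5, 0),  # you race, rival pauses: strategic advantage
    ("race",  "race"):  (1, 1),  # both race: arms-race risk for everyone
}

def best_response(opponent_action, player):
    """Return the action maximizing a player's payoff against a fixed opponent action."""
    def payoff(action):
        key = (action, opponent_action) if player == 0 else (opponent_action, action)
        return payoffs[key][player]
    return max(["pause", "race"], key=payoff)

def nash_equilibria():
    """Pure-strategy profiles where each action is a best response to the other."""
    return [
        (a, b)
        for a in ("pause", "race")
        for b in ("pause", "race")
        if best_response(b, 0) == a and best_response(a, 1) == b
    ]

print(nash_equilibria())  # [('race', 'race')]
```

With this payoff ordering, racing strictly dominates pausing for each side regardless of what the other does, so mutual racing is the unique pure-strategy Nash equilibrium despite being collectively worse than mutual pausing.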
“But if we have an arms race and build superintelligent AI, the entire human race is going to be killed off by a rogue AI.” This is a valid point, but I’ll argue that the odds of getting powerful nation-states to pause AI for the “global good” are extremely low. We need only observe that countries like China are still building up their nuclear arsenals despite various treaties aimed at preventing nuclear proliferation.
AFAICT, a plausible strategy is to make sure the US keeps pace in AI development and gradually opens lines of communication to agree on a collective AI security agreement that protects humanity from the dangers of unaligned superintelligence. The US will then be able to approach these negotiations from a place of power (not a place of weakness), which is—by and large—the most important factor in critical negotiations like this one.