Cyberattacks can’t disable anything reliably or for more than days to weeks, though, and there are dozens of major datacenter campuses from multiple somewhat independent vendors. Hypothetical AI-developed attacks might change that, but then there will also be AI-developed information security, adapting to any known kinds of attacks and stopping them from being effective shortly after. So the MAD analogy seems tenuous, and the effect size (of this particular kind of intervention) is much smaller, to the extent that it seems misleading to even mention cyberattacks in this role/context.
Why would this be restricted to cyberattacks? If the CCP believed that ASI was possible, even if they didn’t believe in the alignment problem, the US developing an ASI would plausibly constitute an existential threat to them. It’d mean they lose the game of geopolitics completely and permanently. I don’t think they’d necessarily restrict themselves to covert sabotage in such a situation.
I’m quibbling with cyberattacks specifically being used as the central example throughout the document and also on the podcasts. They do mention other kinds of attacks; see How to Maintain a MAIM Regime:
AI powers must clarify the escalation ladder of espionage, covert sabotage, overt cyberattacks, possible kinetic strikes, and so on.
The CCP has no reason to believe that the US is even capable of achieving ASI, let alone whether the US has an advantage over the CCP. No rational actor will go to war over a mere possibility when the numbers could just as likely be in their favour. E.g. if DeepSeek can almost equal OpenAI with fewer resources, it would be rational to allocate more resources to DeepSeek before doing anything as risky as trying to sabotage OpenAI, which is uncertain to succeed and more likely to invite uncontrollable retaliatory escalation.