TL;DR: To mitigate race dynamics, China and the US should deliberately leave themselves open to the sabotage (“MAIMing”) of their frontier AI systems. This gives both countries an option other than “nuke the enemy” or “rush to build superintelligence first” if the other side's superintelligence appears imminent: MAIM the opponent's AI. The deliberately unmitigated risk of being MAIMed also encourages both sides to pursue carefully planned and communicated AI development, with international observation and cooperation, reducing AI-not-kill-everyoneism risks.
The problem with this plan is obvious: with MAD, you know for sure that if you nuke the other guy, you're gonna get nuked in return; you can't hit all the silos, all the nuclear submarines. With MAIM, you can't be so confident: maybe the enemy's cybersecurity has gotten too good, maybe training efficiency has improved enough that they don't need all their datacenters, maybe their early AGI has already compromised your missile command.
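To make that fragility concrete, here's a toy expected-utility sketch (my own illustrative numbers, nothing from the paper): a state is deterred from rushing only while its perceived probability of being successfully MAIMed stays above a threshold set by the payoffs, which is exactly why the uncertainty above matters.

```python
# Toy model of MAIM deterrence. All payoffs and probabilities are
# made-up placeholders for illustration, not estimates from the paper.

def rush_is_tempting(p_maim: float,
                     payoff_win: float = 100.0,
                     payoff_maimed: float = -20.0,
                     payoff_status_quo: float = 0.0) -> bool:
    """Return True if rushing to superintelligence beats the status quo.

    p_maim: perceived probability that the rival successfully
            sabotages ("MAIMs") your frontier project if you rush.
    """
    expected_rush = (1 - p_maim) * payoff_win + p_maim * payoff_maimed
    return expected_rush > payoff_status_quo

# With MAD-like certainty of retaliation, rushing never pays:
print(rush_is_tempting(p_maim=1.0))   # False

# Deterrence collapses once perceived retaliation odds drop below
# payoff_win / (payoff_win - payoff_maimed) = 100/120 ≈ 0.83:
print(rush_is_tempting(p_maim=0.8))   # True
```

The point of the toy model: MAD keeps p_maim pinned near 1, while each of the doubts listed above (cyberdefense, spare compute, compromised retaliation) pushes the perceived p_maim down toward the threshold where rushing starts to look attractive.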
So the paper argues for at least getting as close as possible to assurance that a rush to superintelligence will get MAIMed: banning underground datacenters, instituting chip-control regimes to block rogue actors, and enforcing confidentiality-preserving inspections of frontier AI development.
Definitely worth considering. Appreciate the writeup.
This is creative.