It’s nice that the Less Wrong hoi polloi get to comment on a strategy document with such an elite origin. Coauthors include Eric Schmidt, who may have been the most influential elite thinker on AI in the Biden years, and xAI’s safety advisor @Dan H, who can’t be too far removed from David Sacks, Trump’s AI czar. That covers both sides of American politics; the third author, Alexandr Wang, is also American, but he’s Chinese-American, so it’s as if all the geopolitical factions with a say in the AI race are represented.
However, the premises of the document are simply wrong (in my opinion). Section 3.4 gives us the big picture: it lists four strategies for dealing with the rise of superintelligence, namely the Hands Off Strategy, Moratorium Strategy, Monopoly Strategy, and Multipolar Strategy, the last being the one the paper argues for. The Multipolar Strategy combines mutual assured destruction (MAD) between Chinese and American AI systems with a consensus to prevent proliferation of AI technology to other actors such as terrorists.
I get that this is hardheaded geostrategic thinking, and a genuine advance on that front. But the rise of superintelligence means the end of human rule on Earth, no matter who builds it. The world will be governed either by a system of entirely nonhuman AIs, or by AI-human hybrids in which the AI part must necessarily dominate if they are to keep up with the “intelligence recursion” mentioned by the paper.
Section 4.1 goes into more detail. A US or Chinese bid for dominance is described as unstable, because it would eventually provoke a cyber war in which the AI infrastructure of both sides is destroyed. A mutual moratorium is also described as unstable, because either side could defect at any time. The paper claims that the most stable situation, which is also the default, is one in which the mutually destructive cyber war is possible but neither side initiates it.
This is a new insight for me—the idea of cyber war targeting AI infrastructure. It’s a step up in sophistication from “air strikes against data centers”. And at least cyber-MAD is far less destructive than nuclear MAD. I am willing to suppose that cyber-MAD already exists, and that this paper is an attempt to embed the rise of AI into that framework.
But even cyber-MAD is unstable, because of AI takeover. The inevitable winner of an AI race between China and America is not China or America, it’s just some AI. So I definitely appreciate the clarification of interstate relations in this penultimate stage of the AI race. But I still see no alternative to trying to solve the problem of “superalignment”, and for me that means making superintelligent AI that is ethical and human-friendly even when completely autonomous—and doing that research in public, where all the AI labs can draw on it.