My take on this: AGI has existed since 2022 (ChatGPT). There are now multiple companies in America and in China which have AGI-level agents. (It would be good to have a list of countries which are next in line to develop indigenous AGI capability.) Given that we are already in a state of coexistence between humans and AGI, the next major transition is ASI, and that means, if not necessarily the end of humanity, the end of human control over human affairs. This most likely means that the dominant intelligence in the world is entirely nonhuman AI; it could also mean some human-AI symbiosis, but either way, it won’t be natural humanity in charge.
A world in which the development of AGI is allowed, let alone a world in which there is a race to create ever more powerful AI, is a world which by default is headed towards ASI and the dethronement of natural humanity. That is our world already, even if our tech and government leadership manage not to see it that way.
So the most consequential thing that humans can care about right now is the transition to ASI. You can try to stop it from ever happening, or you can try to shape it in advance. Once it happens, it’s over: humans per se have no further say; if they still exist, they are at the mercy of the transhuman or posthuman agents then in charge.
Now let me try to analyze your essay from this point of view. Your essay is meant as a critique of the paradigm according to which there is a race between America and China to create powerful AI, as if all that either side needs to care about is getting there first. Your message is that even if America gets to powerful AI (or safe powerful AI) first, the possibility of China (and in fact anyone else) developing the same capability would still remain. I see two main suggestions: a “mutual containment” treaty, in which both countries place bounds on their development of AI along with means of verifying that the bounds are being obeyed; and spreading around defensive measures, which make it harder for powerful AI to impose itself on the world.
My take is that mutual containment really means a mutual commitment to stop the creation of ASI, a commitment which to be meaningful ultimately needs to be followed by everyone on Earth. It is a coherent position, but it’s an uphill struggle since current trends are all in the other direction. On the other hand, I regard defense against ASI as impossible. Possibly there are meaningful defensive measures against lesser forms of AI, but ASI’s relationship to human intelligence is like that of the best computer chess programs to the best human chess players—the latter simply have no chance in such a game.
At the same time, the only truly safe form of powerful AI is ASI governed by a value system which, if placed in complete control of the Earth, would still be something we could live with, or even something that is good for us. Anything less, e.g. a legal order in which powerful AI exists but further development is banned, is unstable against further development to the ASI level.
So there is a sense in which development of safe powerful AI really is all that matters, because it really has to mean safe ASI, and that is not something which will stay behind borders. If America, China, or any other country achieves ASI, that is success for everyone. But it does also imply the loss of sovereignty for natural humanity, in favor of the hypothetical benevolent superintelligent agent(s).
Really appreciate the thorough engagement -

> My take is that mutual containment really means a mutual commitment to stop the creation of ASI, a commitment which to be meaningful ultimately needs to be followed by everyone on Earth.
I infer that you believe I disagree with this?
My aim is something like 1) if folks are going to race with China, at least recognize how hard it’ll be and what it would actually take to end the competition (don’t count things as wins that aren’t), and 2) help people notice that the factors that’d be necessary for “winning” would also support cooperation to head off the race, and so cooperation is more possible than they might think. (Or, winning in the race framework is even harder than they think.)
> ASI governed by a value system which, if placed in complete control of the Earth, would still be something we could live with,
That’s exactly my point. However, once the value system is defined, it will either lock mankind in or be corrigible. The former case includes options like my take, where the AI only provides everyone with access to education and enforces only universally agreed political opinions, or[1] situations where the AI builds the Deep Utopia or governs the world, criminalising social parasitism worldwide. The race to ASI might also be motivated by the belief that it will empower the creators’ state or enforce the state’s ethos.
The first example of the ASI’s role, unlike the latter two, might be considered not a lock-in, but another system under which mankind can govern itself.