It always seemed outlandish that in The Animatrix, the first AI city (01) was located in the Middle East…
If we had limitless time, it would be interesting to know how this happened. I guess the prehistory of it involved Saudi Vision 2030 (e.g. the desert city NEOM) and the general hypermodernization of Dubai. You can see precursors in the robot Sophia getting Saudi citizenship in 2017, and the UAE’s “Falcon” LLM in 2023.
But the initiative must have come from the American side: some intersection of the geopolitical brain trust around Trump and the AI/crypto brain trust around David Sacks. It took the audacity of CEOs who can pick a country on the map and say, let’s build a whole new technological facility here, combined with the audacity of grand strategists who can pick a region of the world and say, let’s do a huge techno-economic deal with our allies here.
There must have been some individual who first thought, hey, the Gulf Arabs have lots of money and electricity, that would be a good place to build all these AI data centers we’re going to need; I wonder who it was. Maybe Ivanka Trump was telling Jared Kushner about Aschenbrenner’s “Situational Awareness”; they put two and two together, and it became part of the strategic mix along with “Trump Gaza number one”, the new Syria, and whatever they’re negotiating with Iran.
My take on this: AGI has existed since 2022 (ChatGPT). There are now multiple companies in America and in China which have AGI-level agents. (It would be good to have a list of the countries next in line to develop indigenous AGI capability.) Given that we are already in a state of coexistence between humans and AGI, the next major transition is ASI, and that means, if not necessarily the end of humanity, the end of human control over human affairs. Most likely the dominant intelligence in the world would be entirely nonhuman AI; it could also be some human-AI symbiosis, but either way, natural humanity won’t be in charge.
A world in which the development of AGI is allowed, let alone a world in which there is a race to create ever more powerful AI, is a world that by default is headed towards ASI and the dethronement of natural humanity. That is our world already, even if our tech and government leadership manage not to see it that way.
So the most consequential thing that humans can care about right now is the transition to ASI. You can try to stop it from ever happening, or you can try to shape it in advance. Once it happens, it’s over: humans per se have no further say; if they still exist, they are at the mercy of the transhuman or posthuman agents now in charge.
Now let me try to analyze your essay from this point of view. Your essay is meant as a critique of the paradigm according to which there is a race between America and China to create powerful AI, as if all that either side needs to care about is getting there first. Your message is that even if America gets to powerful AI (or safe powerful AI) first, the possibility of China (and in fact anyone else) developing the same capability would still remain. I see two main suggestions: a “mutual containment” treaty, in which both countries place bounds on their development of AI along with means of verifying that the bounds are being obeyed; and spreading around defensive measures, which make it harder for powerful AI to impose itself on the world.
My take is that mutual containment really means a mutual commitment to stop the creation of ASI, a commitment which to be meaningful ultimately needs to be followed by everyone on Earth. It is a coherent position, but it’s an uphill struggle since current trends are all in the other direction. On the other hand, I regard defense against ASI as impossible. Possibly there are meaningful defensive measures against lesser forms of AI, but ASI’s relationship to human intelligence is like that of the best computer chess programs to the best human chess players—the latter simply have no chance in such a game.
At the same time, the only truly safe form of powerful AI is ASI governed by a value system which, if placed in complete control of the Earth, would still be something we could live with, or even something that is good for us. Anything less, e.g. a legal order in which powerful AI exists but further development is banned, is unstable against further development to the ASI level.
So there is a sense in which the development of safe powerful AI really is all that matters, because it really has to mean safe ASI, and that is not something which will stay behind borders. If America, China, or any other country achieves safe ASI, that is success for everyone. But it does also imply the loss of sovereignty for natural humanity, in favor of the hypothetical benevolent superintelligent agent(s).