An aligned ASI, if it were possible, would be capable of a degree of perfection beyond that of human institutions.
The corollary of this is that an aligned ASI in the strong sense of “aligned” used here would have to dissolve currently existing human institutions, and the latter will obviously oppose that. As it stands, even if we solve technical alignment (which I do think is plausible at this rate), we’ll end up with either an ASI aligned to a nation-state, or a corporate ASI turning all available matter into economium, both of which are x-risks in the longtermist sense (and maybe even s-risks, e.g. in the former case if Xi or Trump are bioconservative and speciesist, which I’m fairly sure they are).
As Yudkowsky wrote nearly 18 years ago in Applause Lights:
Suppose that a group of democratic republics form a consortium to develop AI, and there’s a lot of politicking during the process—some interest groups have unusually large influence, others get shafted—in other words, the result looks just like the products of modern democracies. Alternatively, suppose a group of rebel nerds develops an AI in their basement, and instructs the AI to poll everyone in the world—dropping cellphones to anyone who doesn’t have them—and do whatever the majority says. Which of these do you think is more “democratic,” and would you feel safe with either?
To re-use @Richard_Ngo’s framework of the three waves of AI safety, the first generation around SIAI/MIRI had a tendency to believe that creating AGI was mostly an engineering problem, and dismissed the lines of thought that predicted modern scaling laws. So the idea of being that “group of rebel nerds” and creating Friendly AGI in your basement (which was ostensibly SIAI/MIRI’s goal) could have seemed realistic to them back then.
Then the deep learning revolution of the 2010s happened and it turned out that the first wave of AI safety was wrong: the bottleneck to AGI really is access to large amounts of compute, which you can only get through financial backing from corporations (for DeepMind, Google; for OpenAI, Microsoft; for Anthropic, Amazon), and which is easy for the state to clamp down on.
And then the AI boom of the 2020s happened, and states themselves are now more and more conscious of the threat of AGI. Applause Lights was written in 2007. I would predict that by 2027 a private project with the explicit goal of developing AGI to overthrow all existing national governments would receive about the same public reaction as a private project with the explicit goal of developing nuclear weapons to overthrow all existing national governments.
For the third wave of AI safety (quoting Ngo again), there are different ways you can go from here:
Push for your preferred state or corporation to achieve aligned (in the weak sense) ASI, thus trusting them with the entire long-term future of humanity
Wholly embrace that you have a comic-book super-villain plan to take over the world and prepare for state repression
The more realistic golden mean between those two plans: develop artificial intelligence in a decentralized way that still lets us first achieve a post-scarcity economy, longtermist institutional reform, possibly a long reflection, and only then (possibly) building aligned ASI. To me, this is the common throughline between differential technological development, d/acc, coceleration, Tool AI, organic alignment, mutual alignment, cyborgism, and other related ideas.
The corollary of this is that an aligned ASI in the strong sense of “aligned” used here would have to dissolve currently existing human institutions, and the latter will obviously oppose that
Interesting analysis, but this statement is a bit strong. A global safe AI project would be theoretically possible, but it would be extremely challenging to solve the co-ordination issues without AI progress slowing dramatically. Then again, all plans are challenging/potentially impossible.
Alternatively, an aligned ASI could be explicitly instructed to preserve existing institutions. Perhaps it’d be limited to providing advice, or (stronger) it wouldn’t intervene except by preventing existential or near-existential risks.
Yet another possibility is that the world splits into factions which produce their own AGIs, and then these AGIs merge.
A fourth option would be to negotiate a deal where only a few countries are allowed to develop AGI, but in exchange, the UN gets to send observers and provide input on the development of the technology.
Interesting analysis, but this statement is a bit strong. A global safe AI project would be theoretically possible, but it would be extremely challenging to solve the co-ordination issues without AI progress slowing dramatically. Then again, all plans are challenging/potentially impossible.
[...]
Another option would be to negotiate a deal where only a few countries are allowed to develop AGI, but in exchange, the UN gets to send observers and provide input on the development of the technology.
“Co-ordination issues” is a major euphemism here: such a global safe AI project would not just require the kind of coordination one generally expects in relations between nation-states (even in the eyes of the most idealistic liberal internationalists), but effectively having already achieved a world government and species-wide agreement on the same moral philosophy, which may itself require having already achieved at the very least a post-scarcity economy. This is more or less what I mean in the last bullet point by “and only then (possibly) building aligned ASI”.
Alternatively, an aligned ASI could be explicitly instructed to preserve existing institutions. Perhaps it’d be limited to providing advice, or (stronger) it wouldn’t intervene except by preventing existential or near-existential risks.
Depending on whether this advice is available to everyone or only to the leadership of existing institutions, this would fall under either Tool AI (which is one of the approaches in my third bullet point) or a state-aligned (but CEV-unaligned) ASI (a known x-risk and plausibly an s-risk).
Yet another possibility is that the world splits into factions which produce their own AGIs, and then these AGIs merge.
If the merged AGIs are all CEV-unaligned, I don’t see why we should assume that, just because it is a merger of AGIs from across the world, the merged AGI would suddenly be CEV-aligned.