Interesting analysis, but this statement is a bit strong. A global safe AI project would be theoretically possible, but it would be extremely challenging to solve the co-ordination issues without AI progress dramatically slowing. Then again, all plans are challenging/potentially impossible.
[...]
Another option would be to negotiate a deal where only a few countries are allowed to develop AGI, but in exchange, the UN gets to send observers and provide input on the development of the technology.
“co-ordination issues” is a major euphemism here: such a global safe AI project would not just require the same kind of coordination one generally expects in relations between nation-states (even in the eyes of the most idealistic liberal-internationalists), but effectively having already achieved a world government and species-wide agreement on a single moral philosophy – which may itself require having already achieved, at the very least, a post-scarcity economy. This is more or less what I mean in the last bullet point by “and only then (possibly) building aligned ASI”.
Alternatively, an aligned ASI could be explicitly instructed to preserve existing institutions. Perhaps it’d be limited to providing advice, or, more strongly, it wouldn’t intervene except to prevent existential or near-existential risks.
Depending on whether this advice is available to everyone or only to the leadership of existing institutions, this would fall under either Tool AI (which is one of the approaches in my third bullet point) or a state-aligned (but CEV-unaligned) ASI (a known x-risk and plausibly an s-risk).
Yet another possibility is that the world splits into factions which produce their own AGIs, and then these AGIs merge.
If the AGIs being merged are all CEV-unaligned, I don’t see why we should assume that, just because it is a merger of AGIs from across the world, the merged AGI would suddenly be CEV-aligned.