One should assume that AGI, aligned or unaligned, leads to AI takeover. Even if an AI project somehow threaded the needle of creating a superintelligence whose prime directive was obedience to a particular set of human masters, those masters are just a few steps away from becoming posthuman themselves if they wish, e.g. by asking for the same level of intelligence as the AI. And if your AI's terminal values include not just obedience to the wishes of humans (whether that's an autocrat CEO or a world parliament), but also rejection of anything that would overthrow human rule, then that's not really an AI-assisted government; it's an AI takeover with a luddite prime directive.
The only kind of "AGI world government" that truly leaves humans in charge is one in which the AGI deletes itself, after giving the government the tools and methods to prevent AGI from ever appearing again.