> It doesn’t particularly address the situation in which an AGI on its own initiative tries to take over the world. That is a concern common to all of the governance models
I think this is wrong. The MIRI Technical Governance Team, which I'm part of, recently wrote this research agenda, which includes an "Off switch and halt" plan for governing AI. Stopping AI development before superintelligence directly addresses the situation where an ASI tries to take over the world, by not allowing such AIs to be built in the first place. If you like the "who has a veto" frame, then at the very least it's "every nuclear-armed country has a veto" or something similar.
A deterrence framework, which could be leveraged to prevent ASI from being built and thus reduce AI takeover risk, also appears in Superintelligence Strategy.