The governance philosophy here seems to be “let the companies do as they will and let events unfold as they may.”
That is not quite right. The idea is rather that the government does whatever it does by regulating companies, or possibly entering into some soft-nationalization public-private partnership, as opposed to by operating an AGI project on its own (as in the Manhattan model) or by handing it over to an international agency or consortium (as in the CERN and Intelsat models).
There doesn’t seem to be anything here which addresses the situation in which one company tries to take over the world using its AGI, or in which an AGI acting on its own initiative tries to take over the world, etc.
It doesn’t particularly address the situation in which an AGI on its own initiative tries to take over the world. That is a concern common to all of the governance models. In the OGI model, there are two potential veto points: the company itself can choose not to develop or deploy an AI that it deems too risky, and the host government can prevent the company from developing or deploying an AI that fails to meet some standard that the government stipulates. (In the Manhattan model, there’s only one veto point.)
As for the situation in which one company tries to take over the world using its AGI, the host government may choose to implement safeguards against this (e.g. by closely scrutinizing what AGI corporations are up to). Note that there are analogous concerns in the alternative models, where e.g. a government lab or some other part of a government might try to use AGI for power grabs. (Again, the double veto points in the OGI model might have some advantage here, although the issue is complicated.)
> It doesn’t particularly address the situation in which an AGI on its own initiative tries to take over the world. That is a concern common to all of the governance models.
I think this is wrong. The MIRI Technical Governance Team, which I’m part of, recently wrote this research agenda, which includes an “Off switch and halt” plan for governing AI. Stopping AI development before superintelligence directly addresses the situation where an ASI tries to take over the world, by not allowing such AIs to be built. If you like the frame of “who has a veto”, I think at the very least it’s “every nuclear-armed country has a veto” or something similar.
A deterrence framework, which could be leveraged to prevent ASI from being built and thus bears on AI takeover risk, also appears in Superintelligence Strategy.