I don’t quite understand what point is being made here.
The way I see it, we already inhabit a world in which half a dozen large companies in America and China are pressing toward the creation of superhuman intelligence, a development that naturally leads to the loss of human control over the world unless human beings are somehow embedded in these new entities.
This essay seems to propose that we view this situation as a “governance model for AGI”, alongside other scenarios like an AGI Manhattan Project and an AGI CERN that have not come to pass. But isn’t the governance philosophy here simply “let the companies do as they will and let events unfold as they may”? I don’t see anything that addresses the situation in which one company tries to take over the world using its AGI, or in which an AGI acting on its own initiative tries to take over the world, etc. Did I miss something?
the governance philosophy here seems to be “let the companies do as they will and let events unfold as they may”
That is not quite right. The idea is rather that the government does whatever it does by regulating companies, or possibly by entering into some soft-nationalization public-private partnership, as opposed to by operating an AGI project on its own (as in the Manhattan model) or by handing it over to an international agency or consortium (as in the CERN and Intelsat models).
There doesn’t seem to be anything here which addresses the situation in which one company tries to take over the world using its AGI, or in which an AGI acting on its own initiative tries to take over the world, etc.
It doesn’t particularly address the situation in which an AGI on its own initiative tries to take over the world. That is a concern common to all of the governance models. In the OGI (open global investment) model, there are two potential veto points: the company itself can choose not to develop or deploy an AI that it deems too risky, and the host government can prevent the company from developing or deploying an AI that fails to meet some standard that the government stipulates. (In the Manhattan model, there’s only one veto point.)
As for the situation in which one company tries to take over the world using its AGI, the host government may choose to implement safeguards against this (e.g. by closely scrutinizing what AGI corporations are up to). Note that there are analogous concerns in the alternative models, where e.g. a government lab or some other part of a government might try to use AGI for power grabs. (Again, the double veto points in the OGI model might have some advantage here, although the issue is complicated.)