Tunneling is always a concern in corporate structures, but alternative organizational forms suffer similar problems. Government officials, university department heads, and NGO executives also sometimes misuse the powers of their office to pursue personal or factional interests rather than the official mission of the organization they are supposed to represent. We would need a reason for thinking that this problem is worse in the corporate case in order for it to be a consideration against the OGI model.
As for the suggestion that governments (nationally or internationally) should prohibit profit-generating activities by AI labs that have major negative externalities, this is fully consistent with the OGI model (see section “The other half of the picture”, on p. 4). AGI corporations would be subject to government regulation and oversight, just like other corporations are—and, plausibly, the intensity of government involvement would be much greater in this case, given the potentially transformative impacts of the technology they are developing. It would also be consistent with the OGI model for governments to offer contracts or prizes for various prosocial applications of AI.
We would need a reason for thinking that this problem is worse in the corporate case in order for it to be a consideration against the OGI model.
Could we get info on this by looking at metrics of corruption? I’m not familiar with the field, but I know it’s been busy recently, and maybe there are some good papers that put the private and public sectors on the same scale. A quick Google Scholar search mostly just convinced me that I’d be better served asking an expert.
As for the suggestion that governments (nationally or internationally) should prohibit profit-generating activities by AI labs that have major negative externalities, this is fully consistent with the OGI model
Well, I agree denotationally, but in appendix 4 when you’re comparing OGI with other models, your comparison includes points like “OGI obviates the need for massive government funding” and “agreeable to many incumbents, including current AI company leadership, personnel, and investors”. If governments enact a policy that maintains the ability to buy shares in AI labs, but requires massive government funding and is disagreeable to incumbents, that seems to be part of a different story (and with a different story about how you get trustworthiness, fair distribution, etc.) than the story you’re telling about OGI.
Could we get info on this by looking at metrics of corruption? I’m not familiar with the field, but I know it’s been busy recently, and maybe there are some good papers that put the private and public sectors on the same scale. A quick Google Scholar search mostly just convinced me that I’d be better served asking an expert.
I suspect it would be difficult to get much useful signal on this from the academic literature. This particular issue might instead come down to how much you trust the specific people who are the most likely corporate AI leaders versus your impression of how trustworthy, wholesome, and wise the key people inside or controlling a government-run AGI program would be (in the U.S. or China, over the coming years).
Btw, I’m thinking of the OGI model as offering something of a dual veto structure—in order for something to proceed, it would have to be favored by both the corporation and the host government (in contrast to an AGI Manhattan project, where it would just need to be favored by the government). So at least the potential may exist for there to be more checks and balances and oversight in the corporate case, especially in the versions that involve some sort of very soft nationalization.
your comparison includes points like “OGI obviates the need for massive government funding” … If governments enact a policy that maintains the ability to buy shares in AI labs, but requires massive government funding and is disagreeable to incumbents, that seems to be part of a different story (and with a different story about how you get trustworthiness, fair distribution, etc.) than the story you’re telling about OGI.
In the OGI model, governments have the option to buy shares but also the option not to. It doesn’t require government funding, but if one thinks that it would be good for governments to spend some money on AGI-related stuff then they could do so in the OGI model just as well as in other models—in some countries, maybe even more easily, since e.g. some pension funds and sovereign wealth funds could more easily be used to buy stocks than to be clawed back and used to fund a Manhattan project. Also, I’m imagining that it would be less disagreeable to incumbents (especially key figures in AI labs and their investors) for governments to invest money in their companies than to have their companies shut down or nationalized or outcompeted by a government-run project.
Interesting, thanks.