I didn’t feel like there was a serious enough discussion of why people might not like the status quo.
Even corporations with widely held shares often disproportionately benefit those with more personal ability to direct the corporation. If people are concerned about corporations gaining non-monetary forms of influence, this is a public problem that isn't addressed by the status quo. (A recent example would be xAI biasing Grok toward the US Republican party, presumably to influence users of its site. A future example would be the builders of a superintelligence influencing it to benefit them over other people, including over other shareholders.)
The profit motive inside corporations can be "corrupting"—causing individuals in the corporation to act against the public interest (and sometimes even against the long-term interest of the corporation) through selection, persuasion, or coercion. The tobacco and fossil fuel industries are the classic examples; more modern ones might be cryptocurrency (where the harms mainly involve breaking the law, but we shouldn't assume that people won't break the law when incentivized to) or online gambling.
Another model to compare to might be the one proposed in AI For Humanity (Ma, Ong, Tan). The book as a whole isn't all that impressive, but the model is a good contribution. It's something like "international climate policy for AGI":
Internationally restrict conventional profit-generating activity by AI labs, particularly activity with significant negative externalities (e.g. activity that might end up optimizing "against" people [persuasion, optimization for engagement], or that fuels an unsafe race to superintelligence [curbed both by imposing a strict windfall clause and by going after local incentives like profit from AI agents]).
Provide large incentives (e.g. contracts, prizes) for prosocial uses of AI. (The book's example is the UN Sustainable Development Goals: clean water, education, preserving nature, no famine, etc. One might try to figure out how to add AI safety or artificial ethics to the set of prosocial uses.)
Tunneling is always a concern in corporate structures, but alternative organizational forms suffer similar problems. Government officials, university department heads, and NGO executives also sometimes misuse the powers of their office to pursue personal or factional interests rather than the official mission of the organization they are supposed to represent. We would need a reason for thinking that this problem is worse in the corporate case in order for it to be a consideration against the OGI model.
As for the suggestion that governments (nationally or internationally) should prohibit profit-generating activities by AI labs that have major negative externalities, this is fully consistent with the OGI model (see the section "The other half of the picture", on p. 4). AGI corporations would be subject to government regulation and oversight, just like other corporations are—and, plausibly, the intensity of government involvement would be much greater in this case, given the potentially transformative impacts of the technology they are developing. It would also be consistent with the OGI model for governments to offer contracts or prizes for various prosocial applications of AI.
We would need a reason for thinking that this problem is worse in the corporate case in order for it to be a consideration against the OGI model.
Could we get info on this by looking at metrics of corruption? I'm not familiar with the field, but I know it's been busy recently, and maybe there are some good papers that put the private and public sectors on the same scale. A quick Google Scholar search mostly just convinced me that I'd be better served asking an expert.
As for the suggestion that governments (nationally or internationally) should prohibit profit-generating activities by AI labs that have major negative externalities, this is fully consistent with the OGI model
Well, I agree denotationally, but in Appendix 4, when you're comparing OGI with other models, your comparison includes points like "OGI obviates the need for massive government funding" and "agreeable to many incumbents, including current AI company leadership, personnel, and investors". If governments enact a policy that maintains the ability to buy shares in AI labs but requires massive government funding and is disagreeable to incumbents, that seems to be part of a different story than the one you're telling about OGI (with a different account of how you get trustworthiness, fair distribution, etc.).
Could we get info on this by looking at metrics of corruption? I'm not familiar with the field, but I know it's been busy recently, and maybe there are some good papers that put the private and public sectors on the same scale. A quick Google Scholar search mostly just convinced me that I'd be better served asking an expert.
I suspect it would be difficult to get much useful signal on this from the academic literature. This particular issue might instead come down to how much you trust the specific people who are the most likely corporate AI leaders versus some impression of how trustworthy, wholesome, and wise the key people inside or controlling a government-run AGI program would be (in the U.S. or China, over the coming years).
Btw, I'm thinking of the OGI model as offering something of a dual veto structure—in order for something to proceed, it would have to be favored by both the corporation and the host government (in contrast to an AGI Manhattan project, where it would just need to be favored by the government). So at least the potential may exist for there to be more checks and balances and oversight in the corporate case, especially in the versions that involve some sort of very soft nationalization.
your comparison includes points like "OGI obviates the need for massive government funding" … If governments enact a policy that maintains the ability to buy shares in AI labs but requires massive government funding and is disagreeable to incumbents, that seems to be part of a different story than the one you're telling about OGI (with a different account of how you get trustworthiness, fair distribution, etc.).
In the OGI model, governments have the option to buy shares but also the option not to. It doesn't require government funding, but if one thinks it would be good for governments to spend some money on AGI-related stuff, then they could do so in the OGI model just as well as in other models—in some countries, maybe even more easily, since e.g. some pension funds and sovereign wealth funds could more easily be used to buy stocks than be clawed back to fund a Manhattan project. Also, I'm imagining that it would be less disagreeable to incumbents (especially key figures in AI labs and their investors) for governments to invest money in their companies than to have their companies shut down, nationalized, or outcompeted by a government-run project.
Interesting, thanks.