Thank you for your working paper, “Open Global Investment as a Governance Model for AGI.” It provides a clear, pragmatic, and much-needed baseline for discussion by grounding a potential governance model in existing legal and economic structures. The argument that OGI is more incentive-compatible and achievable in the short term than more idealistic international proposals is a compelling one.
However, I wish to offer a critique based on the concern that the OGI model, by its very nature, may be fundamentally misaligned with the scale and type of challenge that AGI presents. My reservations can be grouped into three main points.
1. The Inherent Limitations of Shareholder Primacy in the Face of Existential Stakes
The core of the OGI model relies on a corporate, shareholder-owned structure. While you thoughtfully include mechanisms to mitigate the worst effects of pure profit-seeking (such as Public Benefit Corporation charters, non-profit ownership, and differentiated share classes), the fundamental logic of such a system remains beholden to shareholder interests. This creates a vast principal-agent problem where the “principals” (all of humanity) have their fate decided by “agents” (a corporation’s board and its shareholders) who are legally and financially incentivized to prioritize a much narrower set of goals.
This leads to a global-scale prisoner’s dilemma. In a competitive environment (even OGI-1 would have potential rivals), the pressure to generate returns, achieve market dominance, and deploy capabilities faster will be immense. This could force the AGI Corp to make trade-offs that favor speed over safety, or profit over broad societal well-being, simply because the fiduciary duty to shareholders outweighs a diffuse and unenforceable duty to humanity. The governance mechanisms of corporate law were designed to regulate economic competition, not to steward a technology that could single-handedly determine the future of sentient life.
2. Path Dependency and the Prevention of Necessary Societal Rewiring
You astutely frame the OGI model as a transitional framework for the period before the arrival of full superintelligence. The problem, however, is that this transitional model may create irreversible path dependency. By entrenching AGI development within the world’s most powerful existing structure—international capital—we risk fortifying the very system that AGI’s arrival should compel us to rethink.
If an AGI corporation becomes the most powerful and valuable entity in history, it will have an almost insurmountable ability to protect its own structure and the interests of its owners. The “rewiring of society” that you suggest might be necessary post-AGI could become politically and practically impossible, because the power to do the rewiring would have already been consolidated within the pre-AGI paradigm. The stopgap solution becomes the permanent one, not by design, but by the sheer concentration of power it creates.
3. Misidentification of the Ultimate Risk: From Distributing Wealth to Containing Unchecked Power
My deepest concern is that the OGI model frames the AGI governance challenge primarily as a problem of distribution: how to fairly distribute the economic benefits and political influence of AGI. This is why it focuses on mechanisms like international shareholding and tax revenues.
I fear the ultimate risk is not one of unfair distribution, but of absolute concentration. As you have explored in your own work, AGI represents a potential tool of immense capability. It offers, in effect, a solution to the game of power itself, allowing its controller to resolve nearly any game-theoretic dilemma in their favor. The single greatest check on concentrated power throughout human history has been the biological vulnerability and mortality of leaders: no ruler has been immortal, and no regime has been omniscient. AGI could sweep those limitations away.
From this perspective, a governance system based on who can accumulate the most capital (i.e., buy the most shares) seems like a terrifyingly arbitrary method for selecting the wielders of such ultimate power. It prioritizes wealth as the key qualification for stewardship, rather than wisdom, compassion, or a demonstrated commitment to the global good.
In conclusion, while I appreciate OGI’s pragmatism, I believe its reliance on a shareholder-centric model is a critical flaw. It applies the logic of our current world to a technology that will create a new one, potentially locking us into a future where ultimate power is wielded by an entity optimized for profit, not for the flourishing of humanity.