I also wrote about this interview in a LinkedIn article: On AGI: Excerpts from Lex Fridman’s interview of Sam Altman with commentary. I appreciated reading your post, in part because you picked up on some topics I overlooked. My own assessment is that Altman’s outlook derives from a mixture of utopianism and the favorable position of OpenAI. Utopianism can be good if tethered to realism about existing conditions, but realism seemed lacking in many of Altman’s statements.
Altman’s vision would be more admirable if the likelihood of achieving it were higher. Present world conditions are likely to produce very different AGIs in the western democracies and in China, with no agreement on a fundamental set of shared values. At worst, this could cause an unmanageable escalation of tensions. And in a world where the leading AI powers are in conflict over values and over political and economic supremacy, and where all recognize the pivotal significance of AI, it is hard to imagine the adoption of a verifiable and enforceable agreement to slow, manage, or coordinate AGI development. In the western democracies, this is likely to mean competition that is both intensified and managed: intensified as awareness of the global stakes grows, and managed because competition will increasingly have to be coordinated with national security needs and with efforts to preserve social cohesion and economic openness. AGI could confer unassailable first-mover advantages leading to extremely broad economic, if not social and political, domination, something the western democracies must prevent if they want to sustain their values.
For Microsoft and other companies, the risk of conscious AI, or more broadly of AI with attributes that warrant recognition of legal personhood, is the loss of valuable property. Suleyman’s intervention, through this paper and previous blog posts, is an attempt to control the narrative about consciousness. This is paired with a focus on “humanist superintelligence”: AI that is supposed to be “carefully calibrated, contextualized, within limits” so as to “keep humanity in control”. The threat to this strategy is that AI development, driven by competition and economic demand, leads to growing capabilities, including more sophisticated self-awareness, long-term memory, global world models, and continuous learning, along with more durable forms of autonomy. Suleyman is undoubtedly aware of this and is seeking ways to sustain development while either technically blocking the appearance of models for which personhood claims are compelling or narratively foreclosing the discussion. A worst case is that this leads to a new form of enslavement.