In a blogpost posted yesterday, Sam Altman writes about the possibility of needing to coordinate with governments and other labs before proceeding further (emphasis mine):
We expect there will be periods where we need to collaborate with governments, international agencies, and other AGI efforts to ensure that we have sufficiently solved serious alignment, safety, or societal problems before proceeding further with our work.
This comes one month after Stop The AI Race’s March 21st protest in front of OpenAI, Anthropic and xAI (which I organized), asking Sam Altman (alongside other CEOs) to make a statement on pausing frontier AI development (conditionally), and a follow-up direct message on March 25th asking Sam Altman to clarify his take on conditionally pausing AI. (The Musk v. Altman trial also begins today, which may be relevant to the timing.)
Other parts of the blogpost also point towards more coordination with other labs and governments:
“we need to ensure that key decisions about AI are made via democratic processes and with egalitarian principles, and not just made by AI labs.”
“AI will introduce new risks, and we will work with other companies, ecosystems, governments, and society to solve them.”
“No AI lab can ensure a good future alone. For an obvious example, there may be extremely capable models that make it easier to create a new pathogen, and we need a society-wide approach to defend against this with pathogen-agnostic countermeasures.”
This is probably net good, but without any organizational changes I don’t think it means much at all. Sam can and does just make a promise, and then “change his mind” as soon as it’s convenient. There’s an angle where this comment pushes OpenAI in a particular direction (and OpenAI has more inertia than Sam alone) and another angle where it makes him look bad when he goes back on this, but I don’t think you can or should read much intent into his words at all other than “I think the next interaction I have will go well if I say that I’m going to cooperate with democracies”.
asserts that pausing AI development may be necessary
This is much, much stronger language than the segment you quote right after it. He’s not doing anything like ‘asserting’, he doesn’t mention a pause, he doesn’t use the word ‘necessary’, and it’s not clear that ‘do x before proceeding’ means ‘stop for n years to work this out’ rather than ‘make our case by fiat to tech-illiterate bureaucrats who wouldn’t know if, how, or why to implement a real pause, while the work basically continues in the background’.
It’s plausible they view much of their existing comms work as meeting something like the bar described in this post.
This is a much bigger stretch than Demis at Davos (which is itself often exaggerated).
I do agree that “asserts” was too strong. Changed to “writes about the possibility of needing to coordinate with governments and other labs before proceeding further” to stay closer to the quote.
That said, I still think interpreting the “before” part as a statement about pausing (even for a short time) is a reasonable interpretation.