I don’t think this is a fair consideration of the article’s entire message. This line from the article specifically calls out slowing down AI progress:
"we could collectively agree (with the backing power of a new organization like the one suggested below) that the rate of growth in AI capability at the frontier is limited to a certain rate per year."
Having spent a long time reading through OpenAI’s statements, I suspect that they are trying to strike a difficult balance between:
A) Doing the right thing by way of AGI safety (including considering options like slowing down or not releasing certain information and technology).
B) Staying at or close to the lead of the race to AGI, because they believe that is the position from which they can have the most positive impact on the development path and the broader conversation around AGI.
Instrumental goal (B) is in tension with ultimate goal (A), though not necessarily in stark conflict, depending on how things play out.
What they're presenting in this article are ways to potentially create a situation where they could slow down and be confident that doing so wouldn't actually lead to worse eventual outcomes for AGI safety. They are also trying to promote and escalate the societal conversation around AGI x-risk.
While it's totally valid to criticise OAI on aspects of their approach to AGI safety, I think it's also fair to say that they are genuinely trying to do the right thing and are simply struggling to chart what is ultimately a very difficult path.
There have been some strong criticisms of this statement, notably by Jeremy Howard et al. here. I've written a detailed response to those criticisms here:
https://www.soroushjp.com/2023/06/01/yes-avoiding-extinction-from-ai-is-an-urgent-priority-a-response-to-seth-lazar-jeremy-howard-and-arvind-narayanan/
Please feel free to share it with others who may find it valuable (e.g. skeptics of AGI x-risk).