Altman has already signed the CAIS Statement on AI Risk (“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”), but OpenAI’s actions almost exclusively exacerbate extinction risk, and nowadays Altman and OpenAI even downplay the very existence of this risk.
I generally agree. But I think this does not invalidate the whole strategy: the call to action in that statement was particularly vague, and I think there is ample room for much more precise statements.
My point was that Altman doesn’t adhere even to vague statements, and he’s a known liar and manipulator, so there’s no reason to believe his word would be worth any more when attached to concrete statements.