Sam Altman is almost certainly aware of the arguments and just doesn’t agree with them. The OpenAI emails are helpful background on this; at least back when OpenAI was founded, Elon Musk seemed to take AI safety relatively seriously.
Elon Musk to Sam Teller—Apr 27, 2016 12:24 PM
History unequivocally illustrates that a powerful technology is a double-edged sword. It would be foolish to assume that AI, arguably the most powerful of all technologies, only has a single edge.
The recent example of Microsoft’s AI chatbot shows how quickly it can turn incredibly negative. The wise course of action is to approach the advent of AI with caution and ensure that its power is widely distributed and not controlled by any one company or person.
That is why we created OpenAI.
They also had a dedicated AI safety team relatively early on, and explicitly mention the reasons in these emails:
Put increasing effort into the safety/control problem, rather than the fig leaf you’ve noted in other institutions. It doesn’t matter who wins if everyone dies. Related to this, we need to communicate a “better red than dead” outlook — we’re trying to build safe AGI, and we’re not willing to destroy the world in a down-to-the-wire race to do so.
They also explicitly reference this Slate Star Codex article, and I think Elon Musk follows Eliezer’s Twitter.

Has Musk tried to convince the other AI companies to also worry about safety?

Main concern right now is very much lab proliferation, ensuing coordination problems, and disagreements / adversarial communication / overall insane and polarized discourse.
Google DeepMind: They are older than OpenAI and also have a safety team. They are very much aware of the arguments. I don’t know about Musk’s impact on them.
Anthropic: They split off from OpenAI. My best guess is that they care about safety at least roughly as much as OpenAI does. Many safety researchers have been quitting OpenAI to go work for Anthropic over the past few years.
xAI: Founded by Musk several years after he walked out of OpenAI. People working there have previously worked at other big labs. The general consensus seems to be that their alignment plan (at least as explained by Elon) is quite confused.
SSI: Founded by Ilya Sutskever after he walked out of OpenAI, which he did after participating in a failed effort to fire Sam Altman. Very much aware of the arguments.
Meta AI: To the best of my knowledge, aware of the arguments but very dismissive of them (at least at the upper management levels).
Mistral AI: I don’t know much about them, but they are probably about the same as Meta AI or worse.
Chinese labs: No idea. I’ll have to look into this.
I am confident that there are relatively influential people within DeepMind and Anthropic who post here and/or on the Alignment Forum. I am unsure about people from other labs, as I am nothing more than a relatively well-read outsider.