Basically just the title; see the OAI blog post for more details.
Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.
In a statement, the board of directors said: “OpenAI was deliberately structured to advance our mission: to ensure that artificial general intelligence benefits all humanity. The board remains fully committed to serving this mission. We are grateful for Sam’s many contributions to the founding and growth of OpenAI. At the same time, we believe new leadership is necessary as we move forward. As the leader of the company’s research, product, and safety functions, Mira is exceptionally qualified to step into the role of interim CEO. We have the utmost confidence in her ability to lead OpenAI during this transition period.”
EDIT:
Also, Greg Brockman is stepping down from his board seat:
As a part of this transition, Greg Brockman will be stepping down as chairman of the board and will remain in his role at the company, reporting to the CEO.
The remaining board members are:
OpenAI chief scientist Ilya Sutskever, independent directors Quora CEO Adam D’Angelo, technology entrepreneur Tasha McCauley, and Georgetown Center for Security and Emerging Technology’s Helen Toner.
EDIT 2:
Sam Altman tweeted the following.
i loved my time at openai. it was transformative for me personally, and hopefully the world a little bit. most of all i loved working with such talented people.
will have more to say about what’s next later.
Greg Brockman has also resigned.
Update: Greg Brockman quit.
Update: Sam and Greg say:
Update: three more resignations including Jakub Pachocki.
Update:
Update: Sam is planning to launch something (no details yet).
Update: Sam may return as OpenAI CEO.
Update: Tigris.
Update: talks with Sam and the board.
Update: Mira wants to hire Sam and Greg in some capacity; board still looking for a permanent CEO.
Update: Emmett Shear is interim CEO; Sam won’t return.
Update: lots more resignations (according to an insider).
Update: Sam and Greg leading a new lab in Microsoft.
Update: total chaos.
Perhaps worth noting: one of the three resignations, Aleksander Madry, was head of the preparedness team which is responsible for preventing risks from AI such as self-replication.
Note that Madry only just started, iirc.
Also: Jakub Pachocki, who was the director of research.
Also seems pretty significant:
The remaining board members are:
Has anyone collected their public statements on various AI x-risk topics anywhere?
Adam D’Angelo via X:
Oct 25
This should help access to AI diffuse throughout the world more quickly, and help those smaller researchers generate the large amounts of revenue that are needed to train bigger models and further fund their research.
Oct 25
We are especially excited about enabling a new class of smaller AI research groups or companies to reach a large audience, those who have unique talent or technology but don’t have the resources to build and market a consumer application to mainstream consumers.
Sep 17
This is a pretty good articulation of the unintended consequences of trying to pause AI research in the hope of reducing risk: [citing Nora Belrose’s tweet linking her article]
Aug 25
We (or our artificial descendants) will look back and divide history into pre-AGI and post-AGI eras, the way we look back at prehistoric vs “modern” times today.
Aug 20
It’s so incredible that we are going to live through the creation of AGI. It will probably be the most important event in the history of the world and it will happen in our lifetimes.
A bit, not shareable.
Helen is an AI safety person. Tasha is on the Effective Ventures board. Ilya leads superalignment. Adam signed the CAIS statement.
For completeness—in addition to Adam D’Angelo, Ilya Sutskever and Mira Murati signed the CAIS statement as well.
Didn’t Sam Altman also sign it?
Yes, Sam has also signed it.
Notably, of the people involved in this, Greg Brockman did not sign the CAIS statement, and I believe that was a purposeful choice.
Also D’Angelo is on the board of Asana, Moskovitz’s company (Moskovitz who funds Open Phil).
Judging from his tweets, D’Angelo seems significantly unconcerned with AI risk, so I was quite taken aback to find out he was on the OpenAI board. I might be misinterpreting his views based on vibes, though.
I couldn’t remember where from, but I know that Ilya Sutskever at least takes x-risk seriously. I remember him recently going public about how failing alignment would essentially mean doom. I think it was published as an article on a news site rather than an interview, which is what he usually does. Someone with a way better memory than me could find it.
EDIT: Nevermind, found them.
Thanks, edited.
“OpenAI’s ouster of CEO Sam Altman on Friday followed internal arguments among employees about whether the company was developing AI safely enough, according to people with knowledge of the situation.
Such disagreements were high on the minds of some employees during an impromptu all-hands meeting following the firing. Ilya Sutskever, a co-founder and board member at OpenAI who was responsible for limiting societal harms from its AI, took a spate of questions.
At least two employees asked Sutskever—who has been responsible for OpenAI’s biggest research breakthroughs—whether the firing amounted to a “coup” or “hostile takeover,” according to a transcript of the meeting. To some employees, the question implied that Sutskever may have felt Altman was moving too quickly to commercialize the software—which had become a billion-dollar business—at the expense of potential safety concerns.”
Kara Swisher also tweeted:
“More scoopage: sources tell me chief scientist Ilya Sutskever was at the center of this. Increasing tensions with Sam Altman and Greg Brockman over role and influence and he got the board on his side.”
“The developer day and how the store was introduced was an inflection moment of Altman pushing too far, too fast. My bet: [Sam will] have a new company up by Monday.”
Apparently Microsoft was also blindsided by this and didn’t find out until moments before the announcement.
“You can call it this way,” Sutskever said about the coup allegation. “And I can understand why you chose this word, but I disagree with this. This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity.” AGI stands for artificial general intelligence, a term that refers to software that can reason the way humans do.
When Sutskever was asked whether “these backroom removals are a good way to govern the most important company in the world?” he answered: “I mean, fair, I agree that there is a not ideal element to it. 100%.”
https://twitter.com/AISafetyMemes/status/1725712642117898654
https://twitter.com/karaswisher/status/1725678898388553901 Kara Swisher @karaswisher
Sources tell me that the profit direction of the company under Altman and the speed of development, which could be seen as too risky, and the nonprofit side dedicated to more safety and caution were at odds. One person on the Sam side called it a “coup,” while another said it was the right move.
Came across this account via a random lawyer I’m following on Twitter (for investment purposes), who commented, “Huge L for the e/acc nerds tonight”. Crazy times...
I think this makes sense as an incentive structure around AI acceleration: even if someone is trying to accelerate AI for altruistic reasons, e.g. differential tech development (say, they calculate that LLMs have better odds of interpretability succeeding because they think in English), they should still lose access to their AI lab shortly after accelerating AI.
They get so much personal profit from accelerating AI that only people prepared to personally lose it all within three years are willing to sacrifice enough to do something as extreme as burning the remaining timeline.
I’m generally not on board with leadership shakeups in the AI safety community, because the disrupted alliance webs create opportunities for resourceful outsiders to worm their way in. I worry especially about incentives for the US natsec community to do this. But when I look at it from the game theory/moloch perspective, it might be worth the risk, if it means setting things up so that the people who accelerate AI always fail to be the ones who profit off of it, and therefore can only accelerate because they think it will benefit the world.
Looks like Sam Altman might return as CEO.
OpenAI board in discussions with Sam Altman to return as CEO—The Verge
It seems the sources are supporters of Sam Altman. I have not seen any indication of this from the board’s side.
Ok, looks like he was invited into OpenAI’s office for some reason at least: https://twitter.com/sama/status/1726345564059832609
This seems to suggest a huge blunder
This is the market itself, not a screenshot! Click one of the “bet” buttons. An excellent feature.
Note: Those are two different markets. Nathan’s market is this one and Sophia Wisdom’s market (currently the largest one by far) is this one.
I expect investors will take the non-profit status of these companies more seriously going forwards.
I hope Ilya et al. realize what they’ve done.
Edit: I think I’ve been vindicated a bit. As I expected, money would just flock to for-profit AGI labs, as it is poised to do right now. I hope OpenAI remains a nonprofit, but I think Ilya played with fire.
So, Meta disbanded its responsible AI team. I hope this story reminds everyone about the dangers of acting rashly.
Firing Sam Altman was really a one time use card.
Microsoft probably threatened to pull its investments and compute, which would let Sam Altman’s new competitor pull ahead regardless, as OpenAI would be left in an eviscerated state in terms of both funding and human capital. This move makes sense if you’re at the precipice of AGI, but not before that.
Their Responsible AI team was in pretty bad shape after recent lay-offs. I think Facebook just decided to cut costs.
It was always a bit weird that they had LeCun in charge and a thing called a “Responsible AI team” in the same company. No matter what one thinks about Sam Altman now, the things he said about AI risks sounded 100 times more reasonable than what LeCun says.
Meta’s actions seem unrelated?
Now he’s free to run for governor of California in 2026:
Prediction market: https://manifold.markets/firstuserhere/will-sam-altman-run-for-the-governo
Aside from obvious questions about how it will impact the alignment approach of OpenAI and whether or not it is a factional war of some sort, I really hope this has nothing to do with Sama’s sister. Both options—”she is wrong but something convinced the OpenAI leadership that she’s right” and “she is actually right and finally gathered some proof of her claims”—are very bad. …On the other hand, as cynical and grim as that is, sexual harassment probably won’t spell a disaster down the line, unlike a power struggle at the top of an AGI-pursuing company.
Speculation on the available info: They must have questioned him on that. Discovering that he was not entirely candid with them would be a good explanation of this announcement. And shadowbanning would be the most discoverable here.
Surely they would use different language than “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities” to describe a #metoo firing.
Yeah, I also think this is very unlikely. I just had to point out the possibility for completeness’ sake.
In other news, someone on Twitter (a leaker? not sure) said that there will probably be more firings and that this is a struggle between the for-profit and non-profit sides of the company, with Sama representing the for-profit side.
I think they said that there were more departures to come. I assumed that was referring to people quitting because they disagreed with the decision.
That reminds me of the post we had here a month ago. When I asked how exactly we are supposed to figure out the truth about something that happened in private many years ago, I was told that:
OP is conducting research; we should wait for the conclusions (should I keep holding my breath?)
we should wait to see whether other victims come forward, and update accordingly (did we actually?)
Now I wonder whether Less Wrong was used as a part of a character-assassination campaign designed to make people less likely to defend Sam Altman in case of a company takeover. And we happily played along.
(This is unrelated to whether firing Sam Altman was good or bad from the perspective of AI safety.)
How surprising is this to alignment community professionals (e.g. people at MIRI, Redwood Research, or similar)? From an outside view, the volatility/flexibility and movement away from pure growth and commercialization seem unexpected and could be to alignment researchers’ benefit (although it’s difficult to see the repercussions at this point). It is surprising to me because I don’t know the inner workings of OpenAI, but I’m more struck that it seems similarly surprising to the LW/alignment community as well.
Perhaps the insiders are still digesting and formulating a response, or want to keep hot takes to themselves for other reasons. If not, I’m curious whether there is actually so little information flowing between alignment communities and companies like OpenAI that this would be as surprising to them as it is to an outsider. For example, there seem to be many people at Anthropic who are directly in or culturally aligned with LW/rationality, and I expected the same to be true to a lesser extent for OpenAI.
I understood there was a real distance between groups, but still, I had a more connected model in my head that is challenged by this news and the response in the first day.
It seems this was a surprise to almost everyone even at OpenAI, so I don’t think it is evidence that there isn’t much information flow between LW and OpenAI.
I’m at CHAI and it’s shocking to me, but I’m not the most plugged-in person.
Someone writes anonymously, “I feel compelled as someone close to the situation to share additional context about Sam and company. . . .”
https://www.reddit.com/r/OpenAI/comments/17xoact/comment/k9p7mpv/
I read their other comments and I’m skeptical. The tone is wrong.
It read like propaganda to me, whether the person works at the company or not.
I wonder what changes will happen after Sam and Greg’s exit. I hope they set a better direction towards AI safety.
I expect Sam to open up a new AI company.
Yeah… On one hand, I am excited about Sam and Greg hopefully trying more interesting things than just scaling Transformer LLMs, especially considering Sam’s answer to the last question on Nov. 1 at the Cambridge Union (1:01:45 in https://www.youtube.com/watch?v=NjpNG0CJRMM), where he seems to think that more than Transformer-based LLMs are needed for AGI/ASI (in particular, he correctly says that “true AI” must be able to discover new physics, and he doubts LLMs are good enough for that).
On the other hand, I was hoping for a single clear leader in the AI race, and I thought that Ilya Sutskever was one of the best possible leaders for an AI safety project. And now Ilya on one side and Sam and Greg Brockman on the other are enemies (https://twitter.com/gdb/status/1725736242137182594), and if Sam and Greg do find a way to beat OpenAI, will they be able to be sufficiently mindful about safety?
Hmmm. Given the way Sam behaves, I can’t see a path where he leads an AI company towards safety. I interpreted his world tour (22 countries?) talking about OpenAI and AI in general as him trying to occupy the mindspace of those countries. The CEO I wish OpenAI had is someone who stays at the office, ensuring we are on track to safely steer arguably the most revolutionary tech ever created, not someone promoting the company or the tech; I think a world tour is unnecessary if one is doing AI development and deployment safely.
(But I could be wrong too. Well, let’s all see what’s going to happen next.)
Interesting, how sharply people disagree...
It would be good to be able to attribute this disagreement to a particular part of the comment. Is it about me agreeing with Sam that “true AI” needs to be able to do novel physics? Or about me implicitly supporting the statement that LLMs would not be good enough (I am not really sure; I think LLMs would probably be able to create non-LLM-based AIs, so even if they are not good enough to achieve the level of “true AI” directly, they might be able to get there by creating differently-architected AIs)?
Or about having a single clear leader being good for safety? Or about Ilya being one of the best safety project leaders, based on the history of his thinking and his qualification? Or about Sam and Greg having a fighting chance against OpenAI? Or about me being unsure of them being able to do adequate safety work on the level which Ilya is likely to provide?
I am curious which of these seem to cause disagreement...
I did not press the disagreement button but here is where I disagree:
Do you mean this in the sense that this would be particularly bad safety-wise, or do you mean this in the sense they are likely to just build huge LLMs like everyone else is doing, including even xAI?
I’m still figuring out Elon’s xAI.
But with regard to how Sam behaves—if he doesn’t improve his framing[1] of what AI could be for the future of humanity—I expect the same results.
(I think he frames it with himself as the main person steering the tech, rather than an organisation or humanity steering it—that’s how his behaviour comes across to me.)
They released a big LLM, Grok. With their crew of stars I hoped for a more interesting direction, but an LLM as a start is not unreasonable (one does need a performant LLM as a component).
Yeah… I thought he deferred to Ilya and to the new “superalignment team” Ilya has been co-leading safety-wise...
But perhaps he was not doing that consistently enough...
I haven’t played around with Grok, so I’m not sure how capable or safe it is. But I hope Elon and his team of experts get the safety problem right, as he has created companies with extraordinary achievements. At least Elon has demonstrated his aspirations to better humanity in other fields (internet/satellites, space exploration, and EVs), and I hope that translates to xAI and Twitter.
I felt differently about Ilya co-leading; it suggested to me that something was happening inside OpenAI. The fact that Ilya needed to co-lead the new safety direction felt like a sign that something was off internally. So maybe today’s announcement is related to that too.
Pretty sure there will be new info from OpenAI in the next week or two. Hoping it favors more safety directions—long term.
I expect the safety of that to be at zero (they don’t think GPT-3.5-level LLMs are a problem in this sense; besides, they market it almost as an “anything goes, anti-censorship LLM”).
But that’s not really the issue; when a system becomes capable of writing code reasonably well, then one starts getting a problem… I hope that when they come to that, to approaching AIs which can create better AIs, they’ll start taking safety seriously… Otherwise, we’ll be in trouble...
I thought he was the appropriately competent person (he was probably the #1 AI scientist in the world). The right person for the most important task in the world...
And the “superalignment” team at OpenAI was… not very strong. The original official “superalignment” approach was unrealistic and hence not good enough. I made a transcript of some of his thoughts, https://www.lesswrong.com/posts/TpKktHS8GszgmMw4B/ilya-sutskever-s-thoughts-on-ai-safety-july-2023-a, and it was obvious that his thinking was different from the previous OpenAI “superalignment” approach and much better (as in, “actually had a chance to succeed”)...
Of course, now, since it looks like the “coup” has mostly been his doing, I am less sure that this is the leadership OpenAI and OpenAI safety needs. The manner of that has certainly been too erratic. Safety efforts should not evoke the feel of “last minute emergency”...
At least it refuses to give you instructions for making cocaine.
Well. If nothing else, the sass is refreshing after the sycophancy of all the other LLMs.
That’s good! So, at least a bit of safety fine-tuning is there...
Good to know...
Yeah, let’s see where they will steer Grok.
Yeah, I agree with your analysis of the superalignment agenda; I think it’s not a good use of the 20% of compute resources that they have. I even think a 20% allocation to AI safety doesn’t go deep enough into the problem, as I think a 100% allocation[1] is necessary.
I haven’t had much time to study Ilya, but I like the way he explains his arguments. I hope they (Ilya, the board, and Mira or a new CEO) will be better at expanding the tech than Sam is. Let’s see.
I think the safest AI will be the most profitable technology, as everyone will want to promote and build on top of it.
So I guess OpenAI will keep pushing ahead on both safety and capabilities, but not so much on commercialization?
Typical speculations:
Annie Altman charges
Undisclosed financial interests (AGI, Worldcoin, or YC)
Potentially relevant information:
OpenAI insiders seem to also be blindsided and apparently angry at this move.
I personally think there were likely better ways for Ilya’s faction to get Sam’s faction to negotiate with him, but this firing makes sense based on some reviews of this company having issues with communication as a whole and potentially having a toxic work environment.
edit: link source now available in replies
The human brain seems to be structured such that
Factional lines are often drawn splitting up large groups like corporations, government agencies, and nonprofits, with the lines tracing networks of alliances, and also retaliatory commitments that are often used to make factions and individuals hardened against removal by rivals.
People are nonetheless occasionally purged along these lines, rather than via more efficient decision theory like value handshakes.
These conflicts and purges are followed by harsh rhetoric, since people feel urges to search languagespace and find combinations of words that optimize for retaliatory harm against others.
I would be very grateful for sufficient evidence that the new leadership at OpenAI is popular or unpopular among a large portion of the employees, rather than among a small number of anonymous people who might have been allied with the purged people.
I think it might be better to donate that info privately, e.g. by messaging LW mods via the intercom feature in the lower right corner, than to post it publicly.
There are certainly factions in most large groups, with internal conflict, but this sort of coup is unprecedented. I think in the majority of cases factions tend to cooperate or come to a resolution. If factions couldn’t cooperate, most corporations would be fairly dysfunctional. If coups were the usual solution, governments would be even more dysfunctional.
This is public information, so is there a particular reason I should have not posted it?
Can you please link to it or say what app or website this is?
Here it is:
“Sam Altman’s reputation among OpenAI researchers (Tech Industry)” https://www.teamblind.com/us/s/Ji1QX120
Can someone from OpenAI anonymously spill the 🍵?
Not from OpenAI but the language sounds like this could be the board protecting themselves against securities fraud committed by Altman.
What kind of securities fraud could he have committed?
I’m just a guy but the impression I get from occasionally reading the Money Stuff newsletter is that basically anything bad you do at a public company is securities fraud, because if you do a bad thing and don’t tell investors, then people who buy the securities you offer are doing so without full information because of you.
I doubt the reason for his ousting was fraud-related, but if it was I think it’s unlikely to be viewed as securities fraud simply because OpenAI hasn’t issued any public securities. I’m not a securities lawyer, but my hunch is even if you could prosecute Altman for defrauding e.g. Microsoft shareholders, it would be far easier to sue directly for regular fraud.
MSFT market cap dropped about $40B in a 15 minute period on the news, so maybe someone can argue securities fraud on that basis? I dunno, I look forward to the inevitable Matt Levine article.
A wild (probably wrong) theory: Sam Altman announcing custom GPTs was the thing that pushed the board to fire him.
customizable AI → users can override RLHF (maybe, probably) → we are at risk from AIs that have been fine-tuned by bad actors
If he was fired for some form of sexual misconduct, we wouldn’t change our views on AI risk. But the betting seems to be that it wasn’t that.
On the other hand, if the reason for his firing was something like he had access to a concerning test result, and was concealing it from the board and the government (illegal as per the executive order) then we’re going to worry about what that test result was, and how bad it is for AI risk.
Worst case: this is an AI preventing itself from being shut down, by getting the board members sympathetic to itself to fire the board members most likely to shut it down. (The “surely you could just switch it off” argument is lacking in imagination as to how an AGI could prevent shutdown.) Personally, I put a low probability on this option.