Claude Sonnet 4.5 was released yesterday. Anthropic credibly describes it as the best coding, agentic, and computer-use model in the world.
With regard to that self-assessment, I'm going to raise this comment by Raemon from a previous thread.
When I chatted with several Anthropic employees at the happy hour roughly a year ago, at some point I brought up the "Dustin Moskovitz's earnest belief was that Anthropic had an explicit policy of not advancing the AI frontier" thing. Some employees have said something like "that was never an explicit commitment. It might have been a thing we were generally trying to do a couple years ago, but that was more like 'our de facto strategic priorities at the time', not 'an explicit policy or commitment'."
When I brought it up, the vibe in the discussion-circle was “yeah, that is kinda weird, I don’t know what happened there”, and then the conversation moved on.
I regret that. This is an extremely big deal. I’m disappointed in the other Anthropic folk for shrugging and moving on, and disappointed in myself for letting it happen.
[...] gwern also claims he talked to Dario and came away with this impression [...]
I leave open the possibility that Anthropic conducted a thorough soul-search and concluded that breaking this de-facto/promised/implied/possible commitment was the best way forward, but the lack of replies there from Anthropic employees (who were quite active elsewhere in the thread) was really unfortunate.