I believe Amodei pushed back on the administration’s demands precisely because he knew Mythos presented a risk.
From what’s publicly known, it seems they had a deal that prevented the administration from using Claude as a cyberweapon. The administration asked to use it for all purposes.
Amodei’s red lines weren’t about using Mythos as a cyberweapon, whether to hack targets or spread disinformation. He did not push back on either of those. He only pushed back on the model being used to make unsupervised kill decisions and for domestic surveillance.
To the extent that Mythos has dangerous capabilities, it’s probably on the cyberwar front, and Amodei failed to push back on that front.
FWIW, I covered more of Anthropic’s boundaries (which, if I’m reading you correctly, weren’t actually all that substantial) in this post. I didn’t rehash that here.
But it’s possible he has other boundaries, e.g., hostile nation-state acts, cyberweapons, but those were not explicitly demanded by the DOD/administration. I don’t know. I wasn’t in the room.
Given that you didn’t talk about the boundaries Anthropic has in its actual contract, I think you did a poor job of that. Their “Exceptions to our Usage Policy,” along with public statements by Dario, suggest that the current contract has explicit limits prohibiting use for disinformation campaigns, the design or use of weapons, censorship, domestic surveillance, and malicious cyber operations.
Anthropic was one of several AI companies to offer its AI tools at the bargain rate of $1 annually per agency, via the GSA’s 2025 OneGov initiative.
This looks like a misunderstanding of the dynamic. The DOD has a $200 million contract with Anthropic.
Also, there’s a reason the DOD and members of the intelligence community use Claude and not Grok for analysis and warfighting capabilities, and it’s not just the GSA deal. Claude is a superior product for many use-cases, which is why Claude was the only LLM designated to handle classified materials until the 28th of February.
Given that the DOD does have a contract with xAI/Grok, that’s again misleading. It also suggests to a naive reader that the Claude models being superior is the reason, when the fact that Claude runs within the classified AWS environment while Grok doesn’t is probably more significant.
But it’s possible he has other boundaries, e.g., hostile nation-state acts, cyberweapons, but those were not explicitly demanded by the DOD/administration.
The administration demanded no specific use cases explicitly. They demanded that all lawful use be allowed in the contract. I don’t think you need to have been in the room to know that the position of the administration was “all lawful use.” Hegseth seemed quite clear to me that he considers all the explicit limits in the current contract bad. This means getting rid of the current explicit limits on disinformation campaigns and malicious cyber operations.
With respect, I’m not sure you fully read the post. I called out the existing contract specifically, though I did not mention the dollar amount.
On February 26, after weeks of negotiation, talks broke down between the Pentagon and Anthropic after Anthropic’s CEO, Dario Amodei, refused to accede to DOD demands for unfettered “lawful”1 uses of its AI tools by the military. Anthropic was generally fine with Claude being used by the military (as it had been since July 2025), and by other strategic military contractors, such as Palantir, Amazon, Oracle, and Lockheed Martin, for everything from supply chain logistics and cyber operations to foreign surveillance & intelligence gathering.
I also mentioned the fact that Anthropic was unique in that it ran in the classified environment. I quoted Anthropic’s language explicitly.
Given that, I don’t think we’re actually in disagreement here, unless you’re objecting to my tone or something not related to the substance of my piece, in which case, there’s nothing more I have to say.