My claim that Anthropic's is the only model the military entrusts for use in classified systems is based on the fact that the article I linked says "Anthropic's Claude is the only AI model currently available in the military's classified systems" (a claim corroborated by other reporting on the topic that appears to have done original digging). This article goes into more detail.
Tom Smith
Tom Smith’s Shortform
The government implied that OpenAI, GDM, and xAI will allow their models to be used for mass surveillance of Americans. Are they right?
The Department of War is trying to pressure Anthropic to allow their models to be used “to spy on Americans en masse, or to develop weapons that fire with no human involvement”. Secretary of War Pete Hegseth is reportedly “close” to having the military refuse to do business with any company that doesn’t cut ties with Anthropic. A senior Pentagon official says he wants to “make sure they pay a price for forcing our hand like this.” (Source: Axios)
Right now Claude is the only model that the military entrusts for use in classified systems, but soon they’ll presumably switch to another company if Anthropic doesn’t back down.
The article states
"A senior administration official said the Pentagon is confident the other three [OpenAI, Google, and xAI] will agree to the 'all lawful use' standard. But a source familiar with those discussions said much is still undecided."
So it sounds like the government is, as a pressure tactic, implying OpenAI, Google, and xAI will roll over and let their models be used to surveil Americans and autonomously kill people.
Is this true? I assume OpenAI, Google, and xAI employees wouldn't stand for this. Can OpenAI, Google, and xAI comment on whether they will allow their models to be used to surveil Americans en masse, or to autonomously kill people without safeguards (esp. measures to ensure they're not used against Americans)?
It’s looking like any AI that anyone in the US builds will be used for whatever purpose the government wants and modified to meet the government’s needs.
Anthropic refused to let the Department of War use their models to spy on Americans en masse or autonomously kill people, and for the past week the DoW has been trying to pressure them to change that.
Today the Department of War issued an ultimatum: if Anthropic doesn't let the Pentagon use its models for any legal purpose the Pentagon wants ("all lawful use"), the Pentagon will either cut ties and declare Anthropic a "supply chain risk," or invoke the Defense Production Act to force the company to tailor its model to the military's needs.[1] The DoW gave Dario until Friday to make a decision.
The former would prevent any company that does business with the DoW from using Anthropic products,[2] a designation normally reserved for foreign adversaries; it has been threatened for a week. The latter is new, and IMO it's a big deal.
It would mean that no matter what Anthropic does, it can't control how the DoW uses its models. It's somewhat ambiguous whether this would be a legal use of the DPA, which is normally applied to hardware, not software. It's also not clear to me what it would mean for Anthropic to "tailor its model" under the DPA.
People at AI companies often assume that if they trust the leadership of their company, they don't have to worry about egregious misuse. But if they develop the technology, they can't stop the government from getting its hands on it. And this incident is evidence that the government is very willing to take extreme measures to get it, and that it intends to use it for things like spying on Americans en masse.
I expected that at some point, when AI was very powerful, governments would try to nationalize it, but I didn't expect this kind of action when the technology was this early in its development, when it is very far from conferring a decisive strategic advantage.
It's important to bear in mind that they are probably trying to sound extra scary to pressure Anthropic and other AI developers in this negotiation. But they have also framed this as an ultimatum and haven't given themselves much room to back down.
Perhaps from using them at all for anything, perhaps from using them to fulfill that specific contract. The details are murky. Per The Verge:
“This could be implemented in a very narrow sense — or an extremely broad one. ‘I suspect the more logical explanation would be the narrower definition, that Anthropic can’t be used as part of a specific statement of work for the Pentagon,’ said Gertz. ‘But based on some of the reporting and effort to make this seem like a punitive move against Anthropic, it’s worth thinking through both of those scenarios.’ ”