A small but noteworthy initial step toward soft nationalization of AI companies

Axios[1]:

Defense Secretary Pete Hegseth is “close” to cutting business ties with Anthropic and designating the AI company a “supply chain risk” — meaning anyone who wants to do business with the U.S. military has to cut ties with the company, a senior Pentagon official told Axios.
The senior official said: “It will be an enormous pain in the ass to disentangle, and we are going to make sure they pay a price for forcing our hand like this.”
Why it matters: That kind of penalty is usually reserved for foreign adversaries. ... Anthropic is prepared to loosen its current terms of use, but wants to ensure its tools aren’t used to spy on Americans en masse, or to develop weapons that fire with no human involvement. The Pentagon claims that’s unduly restrictive, and that there are all sorts of gray areas that would make it unworkable to operate on such terms. Pentagon officials are insisting in negotiations with Anthropic and three other big AI labs — OpenAI, Google and xAI — that the military be able to use their tools for “all lawful purposes.”
Some points of note[2]:

Unless I’m badly mistaken, these are limitations from Anthropic’s standard terms of service, not something recently introduced. So ‘pay a price for forcing our hand like this’ seems misleading at best—presumably the Pentagon read the terms before choosing to sign the contract and incorporating Claude into its processes[3].
This conflict was ostensibly triggered by Anthropic asking about the use of Claude in the Maduro raid[4]. Senior administration official: “Anthropic asked whether their software was used for the raid to capture Maduro, which caused real concerns across the Department of War indicating that they might not approve if it was...Any company that would jeopardize the operational success of our warfighters in the field is one we need to reevaluate our partnership with going forward.”
Axios is correct that designating Anthropic a supply chain risk would be an extremely unusual step; all previous uses of that designation have been for foreign companies with suspected ties to foreign intelligence services (eg Huawei).
Estimates from Opus-4.6 and ChatGPT-5.2 suggest that designating Anthropic a supply chain risk would cost them something like 5% of revenue. This does not include a couple of potential major additional impacts: a) spread from DoD contractors to all federal contractors (I haven’t tried to estimate what this would cost them) and b) impact on Anthropic’s valuation for their intended IPO later this year, which would be ~30x whatever the direct revenue impact turns out to be.
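As a purely illustrative back-of-envelope sketch of how these two numbers combine: the 5% revenue hit and ~30x multiple are the rough estimates above, but the annual revenue figure below is a made-up placeholder (Anthropic’s actual revenue is not something I’m relying on), so only the multiplicative structure matters:

```python
# Illustrative back-of-envelope only. `annual_revenue` is a hypothetical
# placeholder, not Anthropic's actual figure; the 5% revenue hit and ~30x
# revenue multiple are the rough estimates discussed above.
annual_revenue = 10e9        # hypothetical: $10B/year
revenue_hit_share = 0.05     # ~5% of revenue lost to the designation
revenue_multiple = 30        # assumed IPO valuation of ~30x revenue

direct_hit = annual_revenue * revenue_hit_share   # recurring, per year
valuation_hit = direct_hit * revenue_multiple     # one-time valuation impact

print(f"Direct revenue hit: ${direct_hit / 1e9:.1f}B per year")
print(f"Implied valuation hit: ${valuation_hit / 1e9:.0f}B")
```

The conclusion scales linearly with whatever the real revenue figure turns out to be.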
So this looks like a small but real step in the direction of soft nationalization as described by Cheng & Katzke, where the government uses a range of levers to exercise control over the choices of private companies. To be clear: the military choosing not to use Anthropic would not qualify in my view, but taking the highly unusual step of declaring them a supply chain risk (such that ‘anyone who wants to do business with the U.S. military has to cut ties with the company’) does. Of course, it’s possible that this is an empty threat and the Pentagon wouldn’t follow through, but unless Anthropic has reason to believe that, it’s effective either way.
I’m not planning to do further analysis of this issue, but I encourage others to.

Hat tip to my marvelous wife Celene.

Some analysis support from Opus-4.6 and ChatGPT-5.2.
The aspect of all this that I’m most uncertain about is whether the negotiation is about changing the terms of an existing contract or setting terms for a future contract. Normally I would expect terms to be agreed upon before signing a contract. But the current two-year contract, signed in July 2025, is an OTA (‘Other Transaction Agreement’), a type of prototype agreement that, as I understand it, is less set in stone than the more typical ‘FAR’ contracts. My understanding of US federal government procurement processes is pretty limited, and as far as I can tell the text of the contract isn’t publicly available[5]. But still, surely these terms weren’t a surprise to the Pentagon? If you have reason to think they were, please comment! That would shift my thinking on the degree to which this threat represents a step toward soft nationalization rather than DoD being pissed off that Anthropic is springing terms on them unexpectedly.
Or perhaps that was a pretext. Per Axios, ‘An Anthropic spokesperson denied that: “Anthropic has not discussed the use of Claude for specific operations with the Department of War. We have also not discussed this with any industry partners, including Palantir, outside of routine discussions on strictly technical matters.”’

I tried to clarify this via Claude and ChatGPT and WSJ[6] without success, but they weren’t high-effort attempts.
WSJ: ‘Some administration officials were frustrated that the company was dictating how its technology could be used’ have they literally not encountered terms of service before lol
Unless I’m badly mistaken, these are limitations from Anthropic’s standard terms of service, not something recently introduced.
When Anthropic did their deal with the military, they granted that there are explicit exceptions where the military can violate the terms of service, and Anthropic can give the military additional exceptions in classified documents, so that Anthropic can tell no one about the new exceptions, including most of their employees, who don’t have security clearances.

The deal also did not give Anthropic the right to any knowledge about how its software would be used, nor any enforcement mechanism.

The deal probably looked to the military like it was structured to allow the military to do what it wants, while Anthropic can pretend to its employees without security clearances, and to the public, that they do things to limit the usage, which is fine for the military.
Pete Hegseth dislikes ‘don’t ask, don’t tell’ arrangements. He’s the Secretary of War, not the Secretary of Defense.

There’s also the chance that the military was asking Claude to do something and Claude refused. Then, when the military asked Anthropic to make Claude comply, Anthropic refused, so Hegseth made up the story about Anthropic asking about usage in the Maduro raid.
WSJ: ‘Some administration officials were frustrated that the company was dictating how its technology could be used’ have they literally not encountered terms of service before lol
They probably have but are in the habit of just ignoring terms of service when it comes to classified matters.
Thanks, that’s very helpful! In particular I hadn’t seen the ‘Exceptions to our Usage Policy’ document, and I agree that it suggests that not all the clauses of the standard ToS necessarily apply.
Some points from your comment I’d like to better understand:
When Anthropic did their deal with the military, they granted that there are explicit exceptions where the military can violate the terms of service
I can read this as either of these two claims:
Per the deal, the military are allowed to violate the ToS in ways that were not specifically negotiated in advance.
The deal specifies certain terms of the ToS that the military is not bound by.
Your comment reads like 1 to me, but I think you probably mean 2? If you mean 1, I’d love to understand what that’s based on.
The deal also did not give Anthropic the right to any knowledge about how its software would be used, nor any enforcement mechanism.
What is the basis of that claim?
have they literally not encountered terms of service before lol
They probably have but are in the habit of just ignoring terms of service when it comes to classified matters.
I’m just being snarky on that one, I figured I could safely bury the snark in a footnote to a footnote but I underestimated the LW readership.
In light of the points you make, I’d be interested to hear your opinion on the degree to which this is a move toward soft nationalization — my sense is that you at least partially disagree? That’s partly just a definitional question, I realize, but I’d love to get your take on it.
This document basically says that Anthropic gives the military exceptions where they can use Claude in ways that violate the standard ToS. Then it gives one example of those exceptions.
When defending the deal with the military against concerns from employees and external parties, Anthropic could point at the one public exception while ignoring the fact that the policy allows for other nonpublic exceptions made in a classified setting.
What is the basis of that claim?
Anthropic was not publicly assuring people that they had an enforcement mechanism that prevented their software from being used in ways their employees didn’t like. Especially if you care about alignment, thinking about a working mechanism would have been important.

I think it’s quite poor for the alignment community to have let Anthropic get away with that at the time.
I’d be interested to hear your opinion on the degree to which this is a move toward soft nationalization — my sense is that you at least partially disagree?
This news story is a sign of friction between Anthropic and the government, which is a bit the opposite of nationalizing Anthropic. I think that when the deal between the military and Anthropic was first made and the exceptions document was published, the most likely future was one where the military would sooner or later do whatever it wants with the software.
Given that the Trump administration is accused of doing plenty that isn’t exactly “lawful”, calling for deals that allow all lawful usage is not the maximum demand that Hegseth could make.
I’m just being snarky on that one, I figured I could safely bury the snark in a footnote to a footnote but I underestimated the LW readership.
When Anthropic made the deal, there were plenty of people who thought that the military wouldn’t do things outside of the agreement, and this is why it was okay for Anthropic to make the deal.
This document basically says that Anthropic gives the military exceptions where they can use Claude in ways that violate the standard ToS.
I see, thanks. I agree with the reading that says that Anthropic may write contracts that include loosenings of specific restrictions, and that all other use restrictions remain in force. So in light of that, it’s plausible (though we don’t know without access to more information on the specific deal) that the contract signed with the military includes one or more clauses of the form ‘Clause X of our terms of service does not apply to users under this contract’, possibly with the further language ‘and instead clause Y on the same topic applies’.

For example, it could be that the contract says (to make up an arbitrary example): ‘The clause forbidding users to “target or track a person’s physical location” does not apply under this contract, and is replaced with a clause forbidding users to “target or track the physical location of US citizens”.’
This news story is a sign of friction between Anthropic and the government, which is a bit the opposite of nationalizing Anthropic. I think that when the deal between the military and Anthropic was first made and the exceptions document was published, the most likely future was one where the military would sooner or later do whatever it wants with the software.
I see. To me this looks like a move that pushes the world more in that direction, where the military can lawfully do whatever it wants with the software. If they had decided not to care about the law in this case, then I expect they would have just done it and not made it public, although it’s plausible that Anthropic’s filters would have prevented that (or perhaps even already have).
Given the general behavior of the current administration, what’s the probability that this move is principally extractive, i.e. a demand that Anthropic pony up a bribe?
It seems weird for this situation to be resolved by Anthropic offering a bribe but not to change its terms of service. My sense is that Hegseth actually wants an anti-woke, strong military and wouldn’t be satisfied with a bribe.
Agreed that it doesn’t read to me as extractive, though I could certainly be wrong. Aren’t lawsuits usually the Trump administration’s first step on extractive moves?
My sense is that Hegseth actually wants an anti-woke, strong military
It would seem strange to me to consider limitations on mass domestic surveillance / autonomous lethal weapons as ‘woke’, since they’re unrelated to DEI etc. That doesn’t necessarily mean that Hegseth isn’t considering them woke, but it would mean they’ve started to use the term to mean something substantially broader — is it your sense that they have?
I don’t think this is ‘woke’ exactly, just that Hegseth has a vision for what the military should be like which is incompatible with Anthropic applying ethical judgement. If Anthropic refuses to surveil Americans, they might push back on other things in the future, and are at especially high risk of refusing to do illegal things Hegseth wants them to. Hegseth thinks generals should be physically strong men who don’t take Harvard classes; likewise, contractors should obey orders and not question their ethics.