This document basically says that Anthropic gives the military exceptions where they can use Claude in ways that violate the standard ToS. Then it gives one example of those exceptions.
When employees and external parties raised concerns about the deal with the military, this allowed Anthropic to point at the one published exception while ignoring the fact that the policy allows other nonpublic exceptions to be made in a classified setting.
What is the basis of that claim?
Anthropic was not publicly assuring people that they had an enforcement mechanism that prevented their software from being used in ways their employees didn’t like. Especially if you care about alignment, thinking about a working mechanism would have been important.
I think it’s quite poor for the alignment community to have let Anthropic get away with that at the time.
I’d be interested to hear your opinion on the degree to which this is a move toward soft nationalization — my sense is that you at least partially disagree?
This news story is a sign of friction between Anthropic and the government, which is a bit the opposite of nationalizing Anthropic. I think that when the deal between the military and Anthropic was first made and the expectations document was published, the most likely future was one where the military would sooner or later do whatever it wants with the software.
Given that the Trump administration is accused of doing plenty that isn’t exactly “lawful”, calling for deals that allow all lawful usage is not the maximum demand that Hegseth could make.
I’m just being snarky on that one; I figured I could safely bury the snark in a footnote to a footnote, but I underestimated the LW readership.
When Anthropic made the deal, there were plenty of people who thought that the military wouldn’t do things outside of the agreement, and that this made it okay for Anthropic to make the deal.
This document basically says that Anthropic gives the military exceptions where they can use Claude in ways that violate the standard ToS.
I see, thanks. I agree with the reading that says that Anthropic may write contracts that include loosenings of specific restrictions, and that all other use restrictions remain in force. So in light of that, it’s plausible (though we don’t know without access to more information on the specific deal) that the contract signed with the military includes one or more clauses of the form ‘Clause X of our terms of service does not apply to users under this contract’, possibly with the further language ‘and instead clause Y on the same topic applies’.
For example, it could be that the contract says (to make up an arbitrary example): ‘The clause forbidding users to “target or track a person’s physical location” does not apply under this contract, and is replaced with a clause forbidding users to “target or track the physical location of US citizens”.’
This news story is a sign of friction between Anthropic and the government, which is a bit the opposite of nationalizing Anthropic. I think that when the deal between the military and Anthropic was first made and the expectations document was published, the most likely future was one where the military would sooner or later do whatever it wants with the software.
I see. To me this looks like a move that pushes the world more in that direction, where the military can lawfully do whatever it wants with the software. If they had decided not to care about the law in this case, then I expect they would have just done it and not made it public, although it’s plausible that Anthropic’s filters would have (or perhaps even already have) prevented that.
If you have a situation where most of what happens is invisible (because it’s classified), then what a move reveals about what’s invisible can be a lot more significant than the individual move.
Given that the previous situation was that Anthropic wanted to make deals out of the public eye about giving the military more access, having public evidence of friction is evidence that Anthropic doesn’t just roll over in private, even though they set up a clever way to be able to roll over in private without their employees or the public knowing that they have rolled over.
that pushes the world more in that direction, where the military can lawfully do whatever it wants with the software.
I’m not sure whether you meant the sentence the way you wrote it. There’s a difference between being able to “lawfully do whatever the military wants” and “the military being able to do whatever it wants as long as it’s lawful”.
Take, for example, how the Obama administration assassinated US citizens abroad, away from the battlefield, without a trial. I don’t think that’s lawful. If I ask Claude whether the US government can lawfully assassinate its own citizens abroad, I don’t think Claude would say that it can. Yet it’s the kind of thing the US military did even pre-Trump.
An agreement with an AI company that the US military can use the AI for all lawful uses doesn’t make it legal.
what a move reveals about what’s invisible can be a lot more significant than the individual move.
What do you see this as revealing? I think I’m missing the implication.
they set up a clever way to be able to roll over in private without their employees or the public knowing that they have rolled over.
I agree that that reading is compatible with the known facts, though a less cynical reading is that Anthropic leadership just genuinely prefer that their products not be used for mass surveillance of Americans.
that pushes the world more in that direction, where the military can lawfully do whatever it wants with the software.
I’m not sure whether you meant the sentence the way you wrote it. There’s a difference between being able to “lawfully do whatever the military wants” and “the military being able to do whatever it wants as long as it’s lawful”.
Sorry, that was a bit unclear on my part. I meant to distinguish between two cases:
In the current world, assuming they can get Claude to cooperate, the military can do whatever they want, but not lawfully (since they’d be violating the terms of the contract).
Whereas if Anthropic drops the restrictions, the military can lawfully do whatever they want (since they’d no longer be violating the terms of the contract).
I agree that even if Anthropic drops their restrictions, the military might use Claude in ways that are unlawful for other reasons (e.g., that they violate US treaties or the Geneva Conventions), but that’s not the distinction I meant to point to.
It reveals that Anthropic is not currently nationalized and is making independent decisions that go against what the administration wants. Decisions that are significant enough to have a public conflict around them.
I agree that that reading is compatible with the known facts, though a less cynical reading is that Anthropic leadership just genuinely prefer that their products not be used for mass surveillance of Americans.
If that’s what they sincerely believe, making an exception policy that allows them to let their products be used in secret, without telling their employees or the public, is stupid.
You would want public commitments to principles like that, with canary documents, for game-theoretic reasons: to precommit to not letting their products be used that way. If you really care about that, you would have a public page listing all exceptions that are made to the Terms of Service. Having a policy where you can make secret exceptions to the Terms of Service is bound to create situations where classified demands are made for additional exceptions, with reduced ability to push back.
In the current world, assuming they can get Claude to cooperate, the military can do whatever they want, but not lawfully (since they’d be violating the terms of the contract).
Violating a contract is not automatically doing something unlawful. There’s no law in the US that says you have to follow every contract. If someone is in breach of contract, you can sue them in civil court to recover damages or get an injunction.
Having a policy where you can make secret exceptions to the Terms of Service is bound to create situations where classified demands are made for additional exceptions, with reduced ability to push back.
Great point.
Violating a contract is not automatically doing something unlawful.
Good correction, thanks.