This document basically says that Anthropic gives the military exceptions that allow them to use Claude in ways that would violate the standard ToS. It then gives one example of such an exception.
When defending the deal with the military against concerns from employees and external parties, this allowed Anthropic to point at that one exception while ignoring the fact that the policy allows for other nonpublic exceptions made in a classified setting.
What is the basis of that claim?
Anthropic was not publicly assuring people that they had enforcement mechanisms that prevented their software from being used in ways their employees didn't like. Especially if you care about alignment, thinking about working enforcement mechanisms would have been important.
I think it's quite poor that the alignment community let Anthropic get away with that at the time.
I’d be interested to hear your opinion on the degree to which this is a move toward soft nationalization — my sense is that you at least partially disagree?
This news story is a sign of friction between Anthropic and the government, which is a bit the opposite of nationalizing Anthropic. When the deal between the military and Anthropic was first made and the expectations document was published, I thought the most likely future was one where the military would sooner or later do whatever it wants with the software.
Given that the Trump administration is accused of doing plenty that isn’t exactly “lawful”, calling for deals that allow all lawful usage is not the maximum demand that Hegseth could make.
I'm just being snarky on that one; I figured I could safely bury the snark in a footnote to a footnote, but I underestimated the LW readership.
When Anthropic made the deal, there were plenty of people who thought that the military wouldn't do things outside of the agreement, and that this is why it was okay for Anthropic to make the deal.
This document basically says that Anthropic gives the military exceptions that allow them to use Claude in ways that would violate the standard ToS.
I see, thanks. I agree with the reading that says that Anthropic may write contracts that include loosenings of specific restrictions, and that all other use restrictions remain in force. In light of that, it's plausible (though we don't know without access to more information on the specific deal) that the contract signed with the military includes one or more clauses of the form 'Clause X of our terms of service does not apply to users under this contract', possibly with the further language 'and instead clause Y on the same topic applies'.
For example, it could be that the contract says (to make up an arbitrary example), 'The clause forbidding users to "target or track a person's physical location" does not apply under this contract, and is replaced with a clause forbidding users to "target or track the physical location of US citizens".'
This news story is a sign of friction between Anthropic and the government, which is a bit the opposite of nationalizing Anthropic. When the deal between the military and Anthropic was first made and the expectations document was published, I thought the most likely future was one where the military would sooner or later do whatever it wants with the software.
I see. To me this looks like a move that pushes the world more in that direction, where the military can lawfully do whatever it wants with the software. If they had decided not to care about the law in this case, then I expect they would have just done it and not made it public, although it's plausible that Anthropic's filters would have prevented that (or perhaps already have).