What a move reveals about what’s invisible can be a lot more significant than the individual move.
What do you see this as revealing? I think I’m missing the implication.
they set up a clever way to be able to roll over in private without their employees or the public knowing that they have rolled over.
I agree that that reading is compatible with the known facts, though a less cynical reading is that Anthropic leadership just genuinely prefer that their products not be used for mass surveillance of Americans.
that pushes the world more in that direction, where the military can lawfully do whatever it wants with the software.
I’m not sure whether you meant the sentence the way you wrote it. There’s a difference between the military being able to “lawfully do whatever it wants” and the military being able to “do whatever it wants, as long as it’s lawful”.
Sorry, that was a bit unclear on my part. I meant to distinguish between two cases:
In the current world, provided they can get Claude to cooperate, the military can do whatever they want, but not lawfully (since they’d be violating the terms of the contract).
Whereas if Anthropic drops the restrictions, the military can lawfully do whatever they want (since they’d no longer be violating the terms of the contract).
I agree that even if Anthropic drops their restrictions, the military might use Claude in ways that are unlawful for other reasons (e.g. because they violate US treaties or the Geneva Conventions), but that’s not the distinction I meant to point to.
It reveals that Anthropic is not currently nationalized and is making independent decisions that go against what the administration wants. Decisions that are significant enough to have a public conflict around them.
I agree that that reading is compatible with the known facts, though a less cynical reading is that Anthropic leadership just genuinely prefer that their products not be used for mass surveillance of Americans.
If that’s what they sincerely believe, making an exception policy that allows them to let their products be used in secret, without telling their employees and the public, is stupid.
You would want public commitments to principles like that, with canary documents, for game-theoretic reasons: to precommit to not letting their products be used that way. If you really care about that, you would have a public page listing all exceptions made to the Terms of Service. Having a policy where you can make secret exceptions to the Terms of Service is bound to create situations where classified demands are made for additional exceptions, with reduced ability to push back.
In the current world, unless they can’t get Claude to cooperate, the military can do whatever they want, but not lawfully (since they’d be violating the terms of the contract).
Violating a contract is not automatically doing something unlawful. There’s no law in the US that says you have to follow every contract; if someone is in breach of contract, you can sue them in civil court to recover damages or get an injunction.
Having a policy where you can make secret exceptions to the Terms of Service is bound to create situations where classified demands are made for additional exceptions, with reduced ability to push back.
Great point.
Violating a contract is not automatically doing something unlawful.
Good correction, thanks.