Anthropic is willing to compromise and is okay with military use including kinetic weapons, but wants to say no to fully autonomous weapons and domestic surveillance.
I believe that a lot of this is a misunderstanding.
Why? My strong-model-loosely-held is:
The Pentagon wants to use AI for domestic surveillance. Like, obviously. Duh.
Anthropic’s decision to raise a fuss about it, instead of tacitly cooperating, marks the company as startlingly “unserious”/inept at realpolitik.
The Pentagon now wants to make an example out of them to ensure the other AGI labs don’t act up in the same manner.
Honestly, I’m surprised; this is a great showing of spine by Anthropic. I did not expect the company to hold onto any principles the moment it got costly. If this is what it looks like, and if they don’t fold, this would be my first meaningful positive update on an AGI lab in years.