Anthropic and Alignment (Ben Thompson in his blog Stratechery)
Warning: I skimmed the post.
He seems to mostly support the decisions of the Department of Defense. I find his viewpoint reasonable and self-consistent enough on a quick read. On a vibes level I disagree with him, but I haven’t fully digested his arguments yet. Quoting Thompson:
At the same time, what is the standard by which it should be decided what is allowed and not allowed if not laws, which are passed by an elected Congress? Anthropic’s position is that Amodei — who I am using as a stand-in for Anthropic’s management and its board — ought to decide what its models are used for, despite the fact that Amodei is not elected and not accountable to the public.
And, on the second point, who decides when and in what way American military capabilities are used? That is the responsibility of the Department of War, which ultimately answers to the President, who also is elected. Once again, however, Anthropic’s position is that an unaccountable Amodei can unilaterally restrict what its models are used for.
[…]
In fact, Amodei already answered the question: if nuclear weapons were developed by a private company, and that private company sought to dictate terms to the U.S. military, the U.S. would absolutely be incentivized to destroy that company.
[…]
Anthropic talks a lot about alignment; this insistence on controlling the U.S. military, however, is fundamentally misaligned with reality. Current AI models are obviously not yet so powerful that they rival the U.S. military; if that is the trajectory, however — and no one has been more vocal in arguing for that trajectory than Amodei — then it seems to me the choice facing the U.S. is actually quite binary:
Option 1 is that Anthropic accepts a subservient position relative to the U.S. government, and does not seek to retain ultimate decision-making power about how its models are used, instead leaving that to Congress and the President.
Option 2 is that the U.S. government either destroys Anthropic or removes Amodei.
The part of this argument that doesn’t work for me is: why Anthropic in particular?
If AI is a nuclear-level technology, then I’d expect the government to be nationalizing all of the AI companies, regardless of contract negotiations. So far, though, all we’re hearing is that Anthropic specifically should be nationalized, while Google and OpenAI continue operating as private companies (in one case by not selling this tech to the military at all, and in the other allegedly on the same contract terms as Anthropic).
I’m somewhat sympathetic to both views [AI is normal tech and private property should be respected / AI is a military technology and should be controlled by the government], but not to the position that Claude in particular is military tech while ChatGPT, Gemini (and DeepSeek) are not.
FWIW I find Dean Ball’s contra take more persuasive (Section IV).