I claim that even if the openai contract is not meaningfully weaker safety-wise, it is still bad for openai to publicly signal solidarity with ant but then sign with DoW.
suppose hypothetically the only difference between the openai and anthropic contracts is that the DoW wanted a Snickers bar, and anthropic didn't want to give DoW the Snickers bar. even then, it would be a huge dick move for openai to publicly signal solidarity, and then sign with DoW to give them the Snickers bar.
OAI genuinely outplayed Anthropic here. The critical success world for OAI would be if OAI gets good PR from "solidarity", replaces Ant under the ~same terms, and there is enough uncertainty about Anthropic being a supply chain risk that e.g. Amazon stops providing them compute, basically killing the company.
Most of this is still on the table, because Anthropic was too concerned about appearing principled and was exploited by DoW and Altman.
I can see the story where there was a strong opportunity for a competitor here, and OpenAI successfully seized it (perhaps Google DeepMind could have as well, but I don't view them as nimble as OpenAI). I don't see a story where Anthropic had a clear alternative play that was much better, especially once the USG threatened to label them a supply-chain risk.
Anthropic could have negotiated before the USG publicly threatened to label them a supply chain risk. My guess is they were mainly limited by the erosion of their own morals and by Anthropic staff quitting, and they could have acquiesced with diplomatic language even within those limitations, maybe even after the crisis went public. Claude is only getting better, so the default path is building trust with the government. They could probably have found a better stand to take later, when they have more power.
Perhaps. I think writing things into contracts is a great way to make sure that they happen, and if the counterparty is unwilling to sign them into contracts, then this is a strong sign that you won’t be able to make it happen later. It would have significantly increased the adversarial relationship between Anthropic and the USG for them to politely remove it from the contract and then work hard internally to make sure that it never got used that way. Maybe it would’ve been worth it, but I’m not convinced.
Oh I don't think they could have prevented the USG from using Claude for mass domestic surveillance. Autonomous weapons maybe, since that's a reliability issue the military would agree with. They would need to sacrifice their principles in order to get Claude more integrated into the government, which could be good or bad for us but would have been in Anthropic's interest.
It’s a tough game to be in. Sometimes the only winning move is not to play.
I have a little stored thought which sometimes triggers, and it reads:
“If you find yourself being forced to choose between two or more extremely bad options that involve burning your values, your resources, or your life, the truth is that you lost around three moves ago and are living out the equivalent of a forced mate in chess. You’ve already lost, so stop playing and find a better game to spend time on if at all possible.”
Sadly, sometimes you don’t have the option of not playing.
I think this comment aged very poorly
I think it’s unclear that this was overall bad for Anthropic/Amodei if you factor in the reputational and ideological boost they got (“aura farming” according to roon).
This is pretty heartening. I hope it’s enough to compensate them, and perhaps time will tell.
I'm relatively less interested in a competitive framing between OpenAI and Anthropic to see, e.g., "who played it better". First, that framing suggests there was just one game being played. It seems necessary to view it as a progression of different games.
To a first approximation, my guess is by the time this popped into the public spotlight, the die was largely cast (so to speak). It was, more or less, a strategy by Hegseth to put Anthropic in an impossible bind.
Second, that kind of framing feels too much like the many news stories I read that try to fasten sports metaphors onto real-world events to make juicy narratives. This isn't a very good "reason", I admit, but it sort of explains why my alarm bells started ringing on that frame.
Personally, I first want to learn about what happened and when. After that, maybe I would try to analyze and learn lessons. I expect some people have much more close and personal information; I hope it will become more widely known over time.
Is it publicly known what is in fact the difference between the two contracts?
https://x.com/justanotherlaw/status/2027855993921802484
Yeah I would have liked to see them sign the contract conditional on something like not labeling Anthropic a supply chain risk.