Why is there so much emphasis on OpenAI and its arrangement with the Department of War relative to GDM’s and xAI’s, and is that rational? While OpenAI seems like it’s behaving much worse than Anthropic, it seems arguably better than those other two, and I’m worried this is a case of it being punished for doing more than nothing (or rather, that some of the ire currently focused on OpenAI should focus on them).
Agree that OpenAI’s and the Department of War’s comms about their arrangement were weird, sketchy, and triggering (but not necessarily worse than complete silence, in my mind).
I am criticizing OpenAI not just because of the terms of their contract, but because they previously said that they had the same redlines as Anthropic, and then, not two days later, signed a contract abandoning those redlines, while quite transparently lying about whether the redlines were protected.
That is bad behavior, and I’m glad they’re getting pushback about it. When you claim to stand for principles, you’re taking on additional social cost when you abandon those principles.
I wouldn’t care nearly as much if they had accepted the same contract but had never made any pretense of standing for their supposed redlines. That is what xAI (and possibly GDM?) have done.
Further, from other incidents, I believe Sam Altman to be dishonest. This is a very clear and legible instance of his dishonesty. I’m in favor of more people having the (IMO correct) understanding that Sam shouldn’t be trusted. As a matter of political expediency, promoting this incident is an opportunity to inform more people about Sam’s lack of honesty and trustworthiness.
Finally, because this has gotten a lot of media attention, I think it could turn out to be a leverage point for broader changes. If OpenAI decided to change its contract with the DoD, that might also put pressure on DeepMind, or might lead to changes in legislation that would close the loopholes that make this a problem. (That second outcome seems like a low-probability hope, to be clear.)