It seems to me that there’s a key aspect of this that isn’t being discussed: the Anthropic agreement involved running their model on government servers, which OpenAI refused to allow from the beginning, believing it to be too risky. As I understand it, OpenAI’s agreement mandates cloud-only deployment and prohibits the model from being hosted on independent government servers or edge devices.
That seems like a big deal. With the model running on government servers, Anthropic could not control the deployment to the same degree that OpenAI can on its own servers. For example, while Anthropic built safety guardrails into the model it gave the government, if the government (or its partner, Palantir) found a way to bypass or override those guardrails, Anthropic had no way to stop it, or even to know. It was also possible for the government (again, or its partner) to deploy Anthropic’s model on “edge” devices. There’s also the issue of model updates, but that seems relatively minor.