It is not clear if the deal OpenAI is negotiating with the DoW provides meaningful red lines against lethal autonomous weapons (LAWs) or domestic mass surveillance.
Edit based on new information: it appears it does not. It seems like “all lawful use” with examples added for clarity. Original comment below.
Reportedly: “OpenAI is pursuing a deal ‘that allows our models to be deployed in classified environments and that fits with our principles. … We would ask for the contract to cover any use except those which are unlawful or unsuited to cloud deployments, such as domestic surveillance and autonomous offensive weapons.’”
Sam Altman seems to be taking the moral high ground here and people have been patting him on the back, but I am unclear on a lot of crucial details, so I’m not ready to pop the champagne just yet.
He says “We have long believed that AI should not be used for mass surveillance or autonomous lethal weapons, and that humans should remain in the loop for high-stakes automated decisions. These are our main red lines,” but the deal he’s proposing doesn’t clearly enforce that!
One reading is that they’re planning to fulfill the DoW’s request for ‘all lawful use’, and that their only other constraint is whether they even have the technical capacity to meet the DoW’s requests.
If this interpretation is wrong, it would be great to get clarity on that. I recognize that these kinds of negotiations are tense and that the Pentagon probably wants to save face. But I think it would be premature to congratulate OpenAI before we have actually confirmed that they’re not just caving in and spinning it as though they’re not.
Here are some questions whose answers, if clear, would ameliorate my concerns:
Who determines if these models are “unsuited for cloud deployment”? The DoW or OpenAI? How would they make that determination?
What makes mass domestic surveillance and/or lethal autonomous weapons “unsuitable for cloud deployment”? Is that likely to change?
Who determines whether a use is “unlawful” under this agreement, OpenAI or the DoW? And would they use the same mechanisms they would use for any other lawful-use agreement?
Would this allow types of domestic mass surveillance that Anthropic’s red lines would have ruled out? “Mass surveillance” isn’t a legal term, so when the Department of War says it’s illegal, it’s not obvious what they mean. Some things I’d consider domestic mass surveillance seem potentially legal, as Anthropic gestured at in their statement. Moreover, the laws here seem pretty fragile and easy to change: the bulk of U.S. foreign intelligence surveillance still operates under an executive order that Reagan signed in 1981, that Bush expanded in 2008, and that any president can unilaterally amend without a vote in Congress.
How would any restrictions against lethal autonomous weapons and domestic mass surveillance be enforced?
Notice this is Altman we’re talking about. He’s not promising the contract will not involve that (and even then it would be very far from certain); instead, he’s saying “we would ask.”
Anthropic refused to help build fully autonomous weapons or conduct domestic surveillance.
Previously, a DoW representative said Claude is the best AI model.
Therefore, presumably, the DoW would only entertain a new deal with OpenAI if it were allowed to use ChatGPT for surveillance and autonomous weapons. If ChatGPT had the same restrictions as Claude, there would be no reason for the DoW to use ChatGPT.
That’s not necessarily true. For example, it might allow them to save face by ousting Anthropic and making an example of them while not losing all AI capabilities.
This is possible. The alternative hypothesis is that Sam Altman is dishonest.
Given what has happened to many previous OpenAI promises, including the non-profit oversight, the resources once set aside for safety, etc., I think we should realistically consider the possibility that OpenAI is perfectly happy to sign a contract with no real safeguards while the government tries to illegally destroy their competitor.
It is not clear about OpenAI, but it was never clear about Anthropic either. The news coverage has never mentioned any details about enforcement, only the words in the contract. The closest we get is the claim that the Pentagon was upset that Anthropic was asking questions of Palantir, which suggests that Anthropic doesn’t have any direct channel to learn about lines being crossed.
Seems fairly clear to me.