Thanks for writing this; I imagine it’s a tricky subject to speak on. I broadly agree with the first and last sections of your post, but I have several questions and quibbles with the section on OpenAI’s deal with the Department of War.
You’re placing a lot of faith in the understanding between OpenAI and the DoW:
> I feel that too much of the focus has been on the “legalese”, with people parsing every word of the contract excerpts we posted. I do not dispute the importance of the contract, but as Thomas Jefferson said “The execution of the laws is more important than the making of them.” The importance of a contract is a shared understanding between OpenAI and the DoW on what the models will and will not be used to do.
I don’t understand why you think the DoW will act in good faith. Their interactions with Anthropic seem outlandishly, dangerously bad-faith. Read this tweet from the DoW’s director and tell me whether that sounds like someone you can reach a reliable shared understanding with. And more broadly, when you look at the conduct of the current administration, do you believe they will not push boundaries, overreach, and interpret statements disingenuously?
While I think a shared understanding is valuable, the main point of a contract is to have options for legal redress or enforcement if that understanding is violated: when I signed a lease with my landlord, we had a shared understanding that he’d fix the dishwasher if it broke. When he didn’t actually fix it, I was very glad I had a contract with some legal remedies.
For this contract to be meaningful, it seems to me that, at a minimum,[1] it needs to be airtight enough that the DoW can’t weasel out of it in court even when they’re arguing hard and trying to exploit every loophole. As I say in my recent post, “As long as one party to the contract insists that they haven’t given up anything beyond what’s already illegal, and their reading is (by a stretch) consistent with the language in the contract, there will be ambiguity about whether anything more is required.”
This will involve wading through some legalese. My recent LessWrong post has a section where I give examples of legal language that looks like it does one thing but in fact does another.
If the contract language is never clarified, the ambiguity will be disproportionately effective at deterring OpenAI from asserting its rights. In the announcement, OpenAI writes “As with any contract, we could terminate it if the counterparty violates the terms.” But will OpenAI be willing to do that if there’s a 50% chance the courts won’t side with them? What about 20%? If OpenAI terminates the contract and then loses in court, it could be forced to pay extremely high damages. Tighter legal language would help OpenAI win a court battle if the DoW violates the contract.
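To make the deterrence point concrete, here is a toy expected-value calculation. Every number in it (the value of stopping misuse, the damages, the probabilities) is made up purely for illustration:

```python
# Toy expected-value model of the decision to terminate the contract
# after a suspected DoW breach. All numbers are hypothetical.

def ev_of_terminating(p_court_agrees: float,
                      value_of_stopping_misuse: float,
                      damages_if_you_lose: float) -> float:
    """Expected value of terminating, given the odds a court upholds it."""
    return (p_court_agrees * value_of_stopping_misuse
            - (1 - p_court_agrees) * damages_if_you_lose)

# Say stopping the misuse is worth 100 (arbitrary units) and losing in
# court costs 500 in damages and fallout.
for p in (0.9, 0.5, 0.2):
    print(f"P(court sides with OpenAI) = {p:.0%}: "
          f"EV of terminating = {ev_of_terminating(p, 100, 500):+.0f}")

# 90% -> +40, 50% -> -200, 20% -> -380: once ambiguity drags the win
# probability toward a coin flip, termination stops being rational.
```

The exact numbers don’t matter; the point is that ambiguous language pushing OpenAI’s odds in court toward a coin flip makes termination look irrational even when the breach is real.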
Even if the government is caught breaching the shared understanding, OpenAI might not be able to terminate the contract unless the contract language makes clear that the terms were violated:
Jessica Tillipman, an expert in government procurement law, writes “I’m also curious about OpenAI’s recourse if the govt crosses a red line. In govt contracts, a contractor can’t just terminate for govt breach (w/ limited exception). If this is an OT [Other Transactions, a particular type of procurement] agreement, they may have negotiated broader termination rights, but we don’t know that.”
Overall, do you disagree? Or maybe you think OpenAI has some leverage here other than the courts that I’m not accounting for?
Bear in mind that the DoW reportedly wants to use LLMs to conduct mass domestic surveillance, and that its senior officials have repeatedly made statements to the effect of “We will not let ANY company dictate the terms regarding how we make operational decisions.”
I also worry you’re too optimistic about other parts of this situation. For example, you mention safeguards:
> It allows us to build in our safety stack to ensure the safe operation of the model and our red lines, as well as have our own forward deployed engineers (FDEs) in place. No safety stack can be perfect, but given the “mass” nature of mass surveillance, it does not need to be perfect to prevent it.
On technical safeguards in general: to the extent you rely on technical safeguards with no legal backing, it seems like you are setting yourself up for the DoW to try to “jailbreak” them.[2]
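For concreteness, here is a toy sketch of the kind of per-request gate a “safety stack” implies. Everything in it is hypothetical (the blocked patterns, the function names, the stub model call); a real stack would presumably use trained classifiers rather than keyword matching, but the structural weakness is the same:

```python
# Toy sketch of a per-request "safety stack" gate. Hypothetical and
# illustrative only -- NOT OpenAI's actual stack.

BLOCKED_PATTERNS = ("mass surveillance", "track all citizens")

def violates_red_lines(prompt: str) -> bool:
    """Stand-in for a policy classifier that screens each request."""
    return any(pattern in prompt.lower() for pattern in BLOCKED_PATTERNS)

def model_generate(prompt: str) -> str:
    """Stub standing in for the actual model call."""
    return f"<model output for {prompt!r}>"

def guarded_completion(prompt: str) -> str:
    if violates_red_lines(prompt):
        return "[refused: request violates contractual red lines]"
    return model_generate(prompt)

print(guarded_completion("Run mass surveillance on this city"))    # refused
print(guarded_completion("Summarize these 10M intercepted chats")) # slips through
```

A gate like this only blocks what it can recognize. A counterparty facing no legal constraints can simply iterate on phrasings until requests pass, which is why technical safeguards without contractual backing seem fragile to me.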
But overall, quibbling over these kinds of contract details isn’t as important as getting some external party, or at least a large number of employees, the ability to look at the full contract and decide what it does or doesn’t permit. Boaz, did you get to read the full contract? If not, how can you be so confident about what it says or implies, when OpenAI leadership has already been mistaken about this contract a few times, and the base rate of contracts containing clauses that substantially undermine or weaken earlier ones is really high?
- ^
Ideally the contract would also include enforcement mechanisms to detect breaches, and good remedies if a breach occurs!
- ^
If you don’t have contractual rights, it’s perfectly legal for the DoW to jailbreak your models. ZDR (zero data retention) would prevent you from learning about it, and they wouldn’t tell your forward-deployed engineers.