When signing an enterprise contract with OpenAI, almost all the liability is passed onto the customer. What are specific risk scenarios/damages that a customer could face, which they could use to build a countersuit?
Potentially, also to justify negotiating a better contract, either with OpenAI (unlikely, since OpenAI seems to very rarely negotiate) or with another AI company that takes on more of the liability (which would require increased funding for safety, evals, etc.). Or, seeing whether there are non-AI solutions that can do what they want (e.g. a senior person at a rail company sincerely asked me 'we need to copy and paste stuff from our CRM to Excel a lot, do you think an AI Agent could help with that?'). I've had a few interactions like this. It seems that for a lot of businesses at the moment, what they are spending on 'AI solutions' could be done cheaper, faster, and more reliably with normal software, but they don't really know what software is.
I don't know that we have much expertise on this sort of thing — we're mostly worried about x-risk, and it doesn't really make sense to talk about liability for that in a legal sense.
What kind of knowledge specifically are these lawyers looking for?