Hi, I’m hosting an AI Safety Law-a-Thon on October 25th to 26th. I’ll be pairing up AI Safety researchers with lawyers to share knowledge and brainstorm risk scenarios. If you’ve ever talked (or argued) about p(doom) and know what a mesa-optimizer is, then you’ve already done something very similar to this.
The main difference here is that you’ll be able to reduce p(doom) in this one! Many of the lawyers taking part are from top, multi-billion-dollar companies, advisors to governments, etc. And they know essentially nothing about alignment. You might be concerned that you lack some legal expertise you think you’d need in order to contribute. You do **not** need any more knowledge than you already have. I guarantee that if you’ve read more than two AI Safety papers, you can absolutely contribute a lot!
We’re giving free in person tickets to all AI Safety researchers: https://luma.com/8hv5n7t0
Please register. This is a chance to massively reduce the money going to OpenAI and other top labs, by billions of dollars; to increase investment in AI Safety as a whole; and to learn how to communicate what you already know about alignment risks in a way that companies will pay a *lot* for. And to make contacts with the people at those companies who will pay for it.
What kind of knowledge specifically are these lawyers looking for?
When a company signs an enterprise contract with OpenAI, almost all of the liability is passed onto the client company. What are the specific risk scenarios/damages they could face, which they could use to build a countersuit?
Potentially, also to justify negotiating a better contract, either with OpenAI (unlikely, since OpenAI seems to very rarely negotiate) or with another AI company that takes on more of the liability (which would require that company to increase funding for safety, evals, etc.).
Or, seeing if there are non-AI solutions that can do what they want (e.g. a senior person at a rail company sincerely asked me ‘we need to copy and paste stuff from our CRM to Excel a lot, do you think an AI Agent could help with that?’). I’ve had a few interactions like this. It seems that, for a lot of businesses at the moment, what they are spending on ‘AI solutions’ could be done cheaper, faster, and more reliably with normal software, but they don’t really know what software is.
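For concreteness, here is a minimal sketch of the kind of “normal software” that would cover that rail-company request, assuming the CRM can export a CSV report (the file names, columns, and function name here are hypothetical, not anything the company actually uses):

```python
# Hypothetical sketch: move exported CRM data into an Excel workbook with no AI involved.
# Assumes the CRM can export a CSV ("crm_report.csv") and that pandas and openpyxl are installed.
import pandas as pd


def crm_csv_to_excel(csv_path: str, xlsx_path: str) -> None:
    df = pd.read_csv(csv_path)           # load the exported CRM report
    df.to_excel(xlsx_path, index=False)   # write it straight to an .xlsx file


if __name__ == "__main__":
    crm_csv_to_excel("crm_report.csv", "crm_report.xlsx")
```

A scheduled task running something like this (or, often, the CRM’s own built-in export) does the whole job deterministically, which is the point of the comparison with an ‘AI Agent’.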
I don’t know that we have much expertise on this sort of thing—we’re mostly worried about X-risk, which it doesn’t really make sense to talk about liability for in a legal sense.