Many talented lawyers do not contribute to AI Safety simply because they've never had a chance to work with AIS researchers, or don't know what the field entails.
I am hopeful that this can improve if we create more structured opportunities for cooperation. That is the main motivation behind the upcoming AI Safety Law-a-thon, organised by AI-Plans:[1]
A hackathon where every team pairs one lawyer with one technical AI safety researcher. Each pair will tackle challenges drawn from real legal bottlenecks and overlooked AI safety risks.
From my time in the tech industry, I suspect that if more senior counsel actually understood alignment risks, frontier AI deals would face far more scrutiny. Right now, most law firms focus on IP rights or privacy clauses when advising their clients, not on whether model alignment drift could blow up the contract six months after signing.
We launched the event one day ago, and we already have an impressive lineup of senior counsel from top firms and regulators. What we still need are technical AI safety people to pair with them!
If you join, you'll help stress-test the legal scenarios and point out the alignment risks that are not salient to your counterpart (obvious to you, but not to them).
You’ll also get the chance to put your own questions to experienced attorneys.
📅 25–26 October
🌍 Hybrid: online + in-person (London)
If you’re up for it, sign up here: https://luma.com/8hv5n7t0
Feel free to DM me if you want to raise any queries!
NOTE: I really want to improve how I communicate updates like these. If this sounds too salesy or overly persuasive, it would really help if you commented and suggested how to improve the wording.
I find this more effective than just downvoting, but of course, do so if you want. Thank you in advance!
This seems like a pretty cool event and I’m excited it’s happening.
That said, I’ve removed this Quick Take from the frontpage. Advertising, whether for events or for role openings or similar, is generally not something we want on the frontpage of LessWrong.
In this case, now that it’s off the front page, this shortform might be insufficiently visible. I’d encourage you to make a top-level post / event about it, which will get put on personal, but might still be a bit more visible.
Hm, I found this ad valuable, and now I wonder whether the LessWrong team has considered a special classifieds category of posts, separate from personal blog posts and frontpages.
Your feedback was super useful! I created this event. If you have the time, would you mind sending me a DM with any other thoughts? Thank you! https://www.lesswrong.com/events/rRLPycsLdjFpZ4cKe/ai-safety-law-a-thon-we-need-more-technical-ai-safety
Classified does seem kind of cool! Do you expect you would upweight “classified” higher than “personal” in your tag filters?
I think I would do that selectively, when I have the time, energy, or need for such ads.
Oh, hm. That's not the sort of thing users follow through on, in my experience. Not saying that this makes Classified a bad idea, but I think it needs a different UI solution (e.g. appearing in the sidebar).
Hi Kave! Thanks for letting me know, and for providing an explanation! I have now created an event and a personal long-form post explaining what the event is about. I'm really hoping that enough technical AI safety researchers sign up, fingers crossed :)
Can you be more concrete about who is in the impressive lineup? I understand privacy is a factor here, so just give the information you can.
Thanks for this! Sure. Without revealing identities or specific affiliations: we have attorneys who consult for big tech companies (Fortune 500, big labs...). We also have in-house counsel at multinationals, as well as government lawyers and people advising regulatory bodies and policymakers.
Honestly, I'm surprised by the reception. I think it'll be a great opportunity for both technical and legal professionals to network and exchange knowledge.