If legal policy is in your wheelhouse, here’s a selection of the growing literature (apologies, some of it is my own):
Noam Kolt – “Algorithmic Black Swans”
Addresses catastrophic tail events via anticipatory regulation
Published: https://journals.library.wustl.edu/lawreview/article/id/8906/
Peter Salib & Simon Goldstein – “AI Rights for Human Safety”
Argues that granting AI systems certain legal rights could advance human safety
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4536494
Yonathan Arbel, Matthew Tokson & Albert Lin – “Systemic Regulation of Artificial Intelligence”
Proposes moving from application-level regulation to system-level regulation
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4543681
Published: https://arizonastatelawjournal.org/article/systemic-regulation-of-artificial-intelligence/
Gabriel Weil – “Tort Law as a Tool for Mitigating Catastrophic Risk from Artificial Intelligence”
Proposes reforming tort law to deter catastrophic AI risks before they materialize
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4694006
Yonathan Arbel et al. – “Open Questions in Law and AI Safety: An Emerging Research Agenda”
Sets out a research agenda for a new field of “AI Safety Law,” focusing on existential and systemic AI risks
Published: https://www.lawfaremedia.org/article/open-questions-in-law-and-ai-safety-an-emerging-research-agenda
Peter Salib – “AI Outputs Are Not Protected Speech”
Argues that AI-generated outputs lack First Amendment protection, enabling stronger safety regulation
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4481512
Mirit Eyal & Yonathan Arbel – “Tax Levers for a Safer AI Future”
Proposes using tax credits and penalties to align AI development incentives with public safety
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4528105
Cullen O’Keefe, Rohan Ramakrishnan, Annette Zimmermann, Daniel Tay & David C. Winter – “Law-Following AI: Designing AI Agents to Obey Human Laws”
Suggests AI agents should be trained and constrained to follow human law, like corporate actors
SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4726207
On this part:
“I agree with this actually”
We need to dig deeper into what open-source AI actually looks like in practice. If OS AI naturally tilts defensive (including counter-offensive capabilities), then yes, both of your accounts make sense. But looking at the current landscape, I think I see something different: we’ve got many models that are actively misaligned (“uncensored”) by the community, and there’s a chance that the next big GPT moment is some brilliant insight that doesn’t need massive compute and can be run from a small cloud.