New report: Safety Cases for AI


ArXiv paper: https://arxiv.org/abs/2403.10462

The idea for this paper occurred to me when I saw Buck Shlegeris’ MATS stream on “Safety Cases for AI.” How would one justify the safety of advanced AI systems? This question is fundamental. It informs how RSPs should be designed and what technical research is useful to pursue.

For a long time, researchers have (implicitly or explicitly) discussed ways to justify that AI systems are safe, but much of this content is scattered across different posts and papers, is not as concrete as I’d like, or does not clearly state its assumptions.

I hope this report provides a helpful bird’s-eye view of safety arguments and moves the AI safety conversation forward by helping to identify the assumptions they rest on (though there’s much more work to do to clarify these arguments).

Thanks to my coauthors: Nick Gabrieli, David Krueger, and Thomas Larsen—and to everyone who gave feedback: Henry Sleight, Ashwin Acharya, Ryan Greenblatt, Stephen Casper, David Duvenaud, Rudolf Laine, Roger Grosse, Hjalmar Wijk, Eli Lifland, Oliver Habryka, Siméon Campos, Aaron Scher, Lukas Berglund, and Nate Thomas.