In this report (also available here as a PDF) I have attempted to put together the most compelling case for why the development of artificial general intelligence (AGI) might pose an existential threat. It stems from my dissatisfaction with existing arguments about the potential risks from AGI: early work tends to be less relevant in the context of modern machine learning, while more recent work is scattered and brief. I originally intended just to summarise other people's arguments, but as this report has grown, it has become more representative of my own views and less representative of anyone else's. So while it covers the standard ideas, I think it also provides a new perspective on how to think about AGI—one which doesn't take any previous claims for granted, but attempts to work them out from first principles.