What is the most effective way to donate to AGI XRisk mitigation?
There are now many organizations in the field of existential risk from Artificial General Intelligence, and I wonder which of them can make the most effective use of small donations.
My priority is mathematical and engineering research aimed at XRisk from superhuman AGI.
My donations currently go to MIRI, which for now looks best to me, but I would appreciate thoughtful assessments.
Machine Intelligence Research Institute: Pioneered AGI XRisk mitigation (alongside FHI, below) and does foundational research. Their approach aims to avoid a rush toward implementing an AGI with unknown failure modes.
Alignment Research Center: Paul Christiano’s new organization. He has done impressive research and has worked with both academia and MIRI.
Center for Human-Compatible Artificial Intelligence at Berkeley: If you’re looking to sponsor academia rather than an independent organization, this one does research that combines mainstream AI methods with serious consideration of XRisk.
Future of Humanity Institute at Oxford is a powerhouse in multiple relevant areas, including AGI XRisk Research.
The Centre for the Study of Existential Risk at Cambridge: Looks promising, though I haven’t seen much on AGI XRisk from them.
Leverhulme Centre for the Future of Intelligence: Also at Cambridge and linked to CSER.
The following are smaller organizations whose scope goes beyond AGI XRisk; I don’t know much about them otherwise.
Donating to a grant-disbursing organization makes sense if you believe it can judge effectiveness better than you can. Alternatively, you might let its grant decisions guide your own direct donations.
Future of Life Institute: It’s not clear whether they still actively contribute to AI XRisk research, but they did disburse grants a few years ago.
Are there others?