I do think there are a bunch of good donation opportunities these days, especially in domains where Open Philanthropy withdrew funding recently. Some more thoughts and details here.
At the highest level, I think what the world can use most right now is a mixture of:
Clear explanations of the core arguments around AI x-risk, both so that people can poke holes in them, and because they will enable many more people who are in positions to do something about AI to do good things
People willing to argue publicly, under their real identities, that governments and society more broadly should do pretty drastic things to handle the rise of AGI
I think good writing and media production is probably at the core of a lot of this. I particularly think writing and arguments aimed at smart, educated people without any particular AI or ML background are more valuable than material aimed at AI and ML people: there has already been a lot of the latter, and the incentives around engaging in discourse with a broader educated audience are less bad. There is also often a collective temptation to create priesthoods around various kinds of knowledge and then insist on deferring to those priesthoods, which I think usually makes collective decision-making worse, and writing in a more accessible way helps push against that.
I think both of these things can benefit a decent amount from funding, though the current funding distribution landscape is pretty hard to navigate. I am on the Long Term Future Fund, which in some sense is trying to address this, but IMO we aren't really doing an amazing job at identifying and vetting opportunities here, so I am not sure I would recommend donating to us; then again, nobody else is doing a great job either.
My tentative guess is that the best approach is to spend a few hours identifying one or two organizations that seem particularly impactful and at least somewhat funding-constrained, make a public comment or post asking others for critical thoughts on those organizations, and iterate a few times until you find something good. This is a decent amount of work, but I don't think there currently exist good and robust deference chains in this space that would reliably give you a positive impact just by trusting them.
I tentatively think that writing a single essay, or a reasonably popular tweet, under your real identity expressing concern about AI x-risk is also quite valuable coming from a pretty successful business person. It doesn't have to be anything huge, but it's good if it's more than a paragraph or a retweet: something people can refer to when they try to list non-crazy people who think these concerns are real, and that can meaningfully be weighed as part of the public discussion on these topics.
I do also think visiting one of the hubs where people who work on this stuff tend to congregate is pretty valuable. You could attend LessOnline or EA Global or something in that space, and talk to people about these topics. There is a risk of ending up unduly influenced by social factors and various herd-mentality dynamics, but there are a lot of smart people around who spend all day thinking about what would be most helpful, and there is a lot of useful knowledge to extract.