If you read the full post on the EA Forum, I think I addressed this clearly. They could pick a half dozen ready-to-go paths: obviously GiveWell, which has stated room for billions in funding; GiveDirectly, with stated room for well over $1b; and Coefficient Giving, which is now explicitly set up to support large cause-area-specific commitments. And they can do all of this while ramping up internal capacity.
And all of that is without drastically lowering the bar for funding to, say, $10k/life saved instead of $5k, which I’d bet would mean those orgs have room for closer to $50b/year in funding. Or they could simply re-review the >$20b in funding cut from USAID and pick their favorite 50%.
So I disagree, and claim it’s not nearly as hard to do this if they care about getting things done ASAP. They’re actually rich enough to give ten billion this year, double that next year, do it again the year after, and double yet again the year after that, while still having capital left even if the equity somehow completely fails to appreciate in value. And then either they have a magically aligned ASI, built before 2030, to fix everything else, or we’re all dead and at least they made things slightly better for people before that.
This might be true, but the arguments here (and in the other post in its current form) are pretty handwavy; I don’t find them compelling, and they seem like the kind of naive philanthropy argument I’ve long found sus.
I didn’t really present arguments either, so here’s a rough pass at that. I don’t think a quick back-and-forth will resolve things; this is multiple posts’ worth of analysis to figure out what’s true in 2026.
But roughly my understanding is:
1) We don’t have good data on cost effectiveness
The raw data on cost effectiveness is sparse, confusedly measures different things, and doesn’t generalize as much as you’d like.
I would bet against there existing a stream of projects at $10k/life (or whatever) that are reasonably vetted and that you could scale funding up to meet. Figuring out a half-decent estimate for a given project that you can compare against other projects is a big endeavor.
The world is full of complex interplays, hidden gotchas, etc.
...
2) The skill of making sense of what data we have (or creating new data) is epistemically difficult
GiveWell and OpenPhil/Coefficient Giving took a long time to build up the capacity to give away a billion a year. (It looks like OpenPhil gave away $1 billion for the first time last year; Claude link with some citations.) It involved lots of trial and error, and it involved early founders who really deeply cared about doing a good job with this.
The OpenAI nonprofit is not going to start off with a deep epistemic commitment to engaging with the real problem (or, if it does, that’d be very surprising).
If Coefficient Giving isn’t giving away more than $1 billion/year so far, I don’t think you should expect them to be able to absorb $25 billion.
...
3) You can’t just hire experts for this
Hiring experts is generally hard, because you don’t have the taste to tell the difference between good and mediocre ones, and because few people even have this skill in the first place.
...
4) When you give away billions, you create an ecosystem of distortions and fraud around you that makes it even more epistemically difficult.
Citation needed; exactly how bad this gets is unclear, and we can dig more into it, but it’s my default expectation.
(I’m particularly worried about trying to give away billions for AI safety, because the flood of fraud and confused mediocre research would dramatically worsen the signal-to-noise ratio. You didn’t particularly advocate for that, but I wanted it as part of the convo.)
...
There is not an off-the-shelf “donate $25 billion in a way that is pretty good” action that people can take. It requires a lot of skilled, intense attention.
I’m not arguing that this is easy. I was not trying to make, in a single post, a formal argument that would settle half of what EA has been trying to figure out, and I agree that we won’t resolve all of this, but there is room for converging on some of it.
1) Agree, the data isn’t great. As I always say, the hard part of decision-making under uncertainty is the uncertainty. BUT that doesn’t mean you don’t have good ways of making those decisions anyway.
2&3) You don’t need to start from zero, and you absolutely can hire people: OpenAI hired away Jacob Trefethen from OpenPhil/Coefficient Giving, and they could similarly hire folks from the Gates Foundation, USAID, etc. The people they hire have track records, and you don’t need nearly as much taste to look at what they actually donated to, or at the analyses of lives saved from those programs. And the biggest reason OpenPhil is giving money away slowly is that Dustin doesn’t want to spend it faster; if he did, they could lower the bar for programs and/or give more to GiveWell, which really is open to getting much more funding.
4) Absolutely. Giving in areas that aren’t well established, like AI safety, or where there are huge information asymmetries, like most academic research, is hard. But doing so in areas like global poverty, public health, or infectious disease work is much easier, and there are mature ecosystems for evaluating impact. That said, yes, there will be fraud and waste, and you should work to minimize it; but the burden of keeping fraud and waste below 0.1% is more than two orders of magnitude larger than the burden of keeping it below 10%, and given the stakes and timelines, that’s a tradeoff worth making.
As to your final comment: I agree there is not an off-the-shelf “donate $25 billion in a way that is pretty good” action, and yes, it requires a lot of skilled, intense attention. But that is a thing you can, in fact, buy or hire, and as we know, money is the unit of caring: not just caring about the thing, but caring about the ability to do the thing.
But there absolutely is a “donate at least $2–3 billion in a way that is pretty good” action: specifically, fully fund GiveWell’s request, and fully fund GiveDirectly’s room for funding. Is this bulletproof? No. Are there perfect actions? No. But delaying is incredibly costly, and yes, if you commit to giving over a hundred billion dollars, you do have a moral, ethical, and legal duty to get off your ass and do it.
(And there WAS a “donate $25 billion in a way that is clearly very good, if not necessarily as effective as we normally expect in EA” action available a year ago, when they committed the money: fund all the global health projects dropped by USAID for, say, a 2-year ramp-down period, to ensure that the programs don’t get broken and that other funders can step in.)