Buying AI labor might be a big deal for philanthropists.
I think the total available for AI safety philanthropy is almost $100B (at current valuations), mostly from Anthropic.[1] The AI safety nonprofit ecosystem currently consumes about $1B per year. There are still good opportunities available, but they're several times less cost-effective than the average spending (because the low-hanging fruit has been plucked[2]). Marginal effectiveness would likely decline by ~half again if you doubled the rate of AI safety philanthropy.
So there’s likely ~30x more money than can be spent on funding AI safety orgs.[3] A priori I expected there would be various decent ways to spend large amounts of money, but I’m aware of few promising proposals.
There are two obvious buckets that might be able to absorb ~unlimited amounts of money well:
1. During an intelligence explosion, buying AI inference to do AI safety work
2. After an intelligence explosion, if (a) pre-superintelligence property rights persist and (b) you can turn wealth into control of distant galaxies, buying control of distant galaxies
Option 2 looks worse than AI safety philanthropy on current margins, even if you can invest so well that your investments grow 100x as a fraction of global wealth by superintelligence-time in expectation. You shouldn't save for it on current margins, but it could become competitive if AI safety philanthropy increases, and after an intelligence explosion there may be nothing else for altruists to spend money on. A collaborator and I hope to publish our analysis of this topic in May.
Option 1 is very uncertain. Even if buying AI labor is important, it doesn't necessarily follow that we can spend present-value $10B+ effectively. Some people are investigating this topic.
Regardless, one upshot for philanthropists is to spend more now: assuming funding will increase in the future and you'll receive that funding or be highly correlated with whoever does, spending now is better than spending later or never.[4]
(Reminder: American donors can do better by donating to politics. This is just about nonprofits.)
(Claim: you can get expected returns of >100%/year by investing well. This isn’t load-bearing for any of the above.)
I think about $100B. Another reasonable person thinks about $40B. We haven’t argued about it.
There are two phenomena here: (a) diminishing returns in people/org quality, and (b) diminishing returns in projects: even if everyone is equally skilled, going from 0 to 100 people is better than going from 900 to 1,000, because the first people cause more important problems to be worked on.
Community funds will be invested decently, in aggregate, until they are spent, so $1B/year for 8 years only costs like $2B now.
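This footnote's arithmetic is consistent with the >100%/year investment claim above: discounting $1B of grants per year over 8 years at an assumed 100%/year return gives a present cost of about $2B. A minimal sketch (the 100% rate and payment timing are assumptions; only the $1B/year and 8-year figures come from the footnote):

```python
# Present value of $1B/year of grants for 8 years, assuming community
# funds compound at ~100%/year until spent (the post's investment claim).
r = 1.0  # assumed expected return: 100%/year
payments_b = [1.0] * 8  # $1B per year, paid at t = 0, 1, ..., 7

# Each year's $1B is discounted back to today at rate r.
pv_b = sum(p / (1 + r) ** t for t, p in enumerate(payments_b))
print(f"${pv_b:.2f}B")  # ≈ $1.99B, matching "like $2B now"
```

At a lower assumed return the present cost rises (e.g. at 50%/year it is closer to $3B), so the "$2B" figure leans on the aggressive return assumption.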
Anthropic investors, staff, and founders aren’t able to sell their equity at will, and they likely won’t be able to until 6 months after IPO. I expect the AI safety philanthropy flow will increase from $1B/year to more like $2B/year by the time Anthropic equity becomes liquid — maybe before then as other funders plan for Anthropic money. And even without Anthropic money, in Good Ventures’s shoes I would want to spend faster.
There’s plenty of room for funding in human intelligence amplification. Easily $100 million, probably much more given some work (active grantmaking, etc.).