Here’s a list of my donations so far this year (put together as part of thinking through whether I and others should participate in an OpenAI equity donation round).
They are roughly in chronological order (though it’s possible I missed one or two). I include some thoughts on what I’ve learned and what I’m now doing differently at the bottom.
$100k to Lightcone
This grant was largely motivated by my respect for Oliver Habryka’s quality of thinking and personal judgment.
This ended up being matched by the Survival and Flourishing Fund (though I didn’t know it would be when I made it). Note that they’ll continue matching donations to Lightcone until the end of March 2026.
$50k to the Alignment of Complex Systems (ACS) research group
This grant was largely motivated by my respect for Jan Kulveit’s philosophical and technical thinking.
$20k to Alexander Gietelink Oldenziel for support with running agent foundations conferences.
~$25k to Inference Magazine to host a public debate on the plausibility of the intelligence explosion in London.
$100k to Apart Research, who run hackathons where people can engage with AI safety research in a hands-on way (technically made with my regranting funds from Manifund, though I treated it like a $100k boost to my own donation budget).
$50k to Janus
Janus and their collaborators are doing the kind of creative thinking and experimentation that has a genuine chance of leading to new paradigms for understanding AI. See for instance this discussion of AI identities.
$15k to Palladium
They are doing good thinking about governance and politics on a surprisingly tight budget.
$100k to Sahil to support work on live theory at groundless.ai
I’ve found my conversations with Sahil extremely generative. He’s one of the researchers I’ve talked to with the most ambitious and philosophically coherent “overall vision” for the future of AI. I still feel confused about how likely his current plans are to actualize that vision (and there are also some points where it’s in tension with my own overall vision) but it definitely seems worth betting on.
Total so far: ~$460k (of which $360k was my own money, and $100k Manifund’s money).
Note that my personal donations this year are >10x greater than any previous year; this is because I cashed out some of my OpenAI equity for the first time. So this is the first year that I’ve invested serious time and energy into donating. What have I learned?
My biggest shift is from thinking of myself as donating “on behalf of the AI safety community” to specifically donating to things that I personally am unusually excited about. I have only a very small proportion of the AI safety community’s money; also, I have fairly idiosyncratic views that I’ve put a lot of time into developing. So I now want to donate in a way which “bets on” my research taste, since that’s the best way to potentially get outsized returns. More concretely:
I’d classify the grants to Apart Research and the Inference Magazine debate as things that I “thought the community as a whole should fund”. If I were making those decisions today, I’d fund Apart Research significantly less (maybe $50k?) and not fund the debate (also because I’ve updated away from public outreach as a valuable strategy).
I consider my donations to ACS, Janus and Sahil as leveraging my research taste: these are some of the people who I have the most productive research discussions with. I’m excited about others donating to them too.
My grants to Lighthaven and Alexander Gietelink Oldenziel are somewhere in between those two categories. I’m still excited about them, though I’m now a bit more skeptical about conferences/workshops in general as a thing I want to support (there are so many conferences; are people actually getting value out of them, or mainly using them as a way to feel high-status?). However, this is less of a concern for agent foundations conferences, and it’s also the sort of thing that I trust Oliver to track and account for.
My political views are unusual enough that I haven’t yet figured out a great way to fund them. Palladium is in the right broad direction but not focused enough on my particular interests for me to want to fund at scale (and again is more of a “someone should fund it” type thing). Regardless, I’m uninterested in almost all of the AI governance interventions others in the community are funding.
Even more recently, I’ve decided that I can bet on my research taste most effectively by simply hiring research assistants to work for me. I’m uncertain how much this will cost me, but if it goes well it’ll be most of my “donation” budget for the next year. I could potentially get funding for this, but at least to start off with, it feels valuable to not be beholden to any external funders.
More generally, I’d be excited if more people who are wealthy from working at AI labs used that money to make more leveraged bets on their own research (e.g. by working independently and hiring collaborators). This seems like a good way to produce the kinds of innovative research that are hard to incentivize under other institutional setups. I’m currently writing a post elaborating on this intuition.
What are the best places to start reading about why you are uninterested in almost all commonly proposed AI governance interventions, and about the AI governance interventions you are interested in? I imagine the curriculum sheds some light on this, but it’s quite long.
> “Even more recently, I’ve decided that I can bet on my research taste most effectively by simply hiring research assistants to work for me.”
This seems right to me—plausibly a massive multiplier.
For example, John Wentworth told me he found a large productivity boost when he started working with David Lorell.
This seems like both a good process (using your existing knowledge to find good opportunities rather than doing normal applications seems in line with my guess at how high-EV grants happen) and a set of grantees I am generally glad to see funded.
*Lighthaven->Lightcone (at least in the case of SFF matching)
Ty, fixed.