Some things I noticed while LARPing as a grantmaker
Written to a new grantmaker.
The first three points are the most important.
Focus on opportunities many times your bar.
Most value comes from finding/creating projects many times your bar, rather than discriminating between opportunities around your bar. If you find/create a new opportunity to donate $1M at 10x your bar (and cause it to get $1M, which would otherwise be donated to a 1x thing), you generate $9M of value (at your bar).[1] If you cause a $1M at 1.5x opportunity to get funded or a $1M at 0.5x opportunity to not get funded, you generate $500K of value. The former is 18 times as good.
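The arithmetic above can be sketched in a few lines (a toy calculation with the post's own numbers; the function name and the "counterfactual multiple" framing are mine):

```python
# Toy surplus calculation: the "bar" is 1 unit of value per dollar, and
# surplus is the value generated beyond what the money would have produced
# in its counterfactual use.

def surplus(amount, multiple_of_bar, counterfactual_multiple=1.0):
    """Value created by redirecting `amount` from a counterfactual use
    at `counterfactual_multiple` to an opportunity at `multiple_of_bar`."""
    return amount * (multiple_of_bar - counterfactual_multiple)

# Finding a new 10x opportunity for $1M that displaces a 1x donation:
find_10x = surplus(1_000_000, 10)        # 9,000,000 units
# Swinging a marginal decision: fund a 1.5x thing, or block a 0.5x thing:
swing_1_5x = surplus(1_000_000, 1.5)     # 500,000 units
block_0_5x = surplus(1_000_000, 1.0, counterfactual_multiple=0.5)  # 500,000

print(find_10x / swing_1_5x)  # 18.0 — the former is 18 times as good
```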
You should probably be like "I do research to figure out what projects should exist, then make them exist" rather than "I evaluate the applications that come to me." That said, most great ideas come from your network, not from your personal brainstorming.

In some buckets of opportunities, the low-hanging fruit will be plucked. In others, nobody’s on the ball and amazing opportunities get dropped. If you’re working in a high-value bucket where nobody’s on the ball, tons of alpha is on the table. (Assuming enough donors or grantmakers will listen to you to fund your best stuff.)
(I talk about “10x opportunities” and “1x opportunities” for simplicity here. It might be better to focus on goodness. Like: our bar is one unit of value per dollar. This opportunity is exciting because it will generate 10M units for $1M; it creates a surplus of 9M units. In “10x” mindset, it’s twice as good to spend $2M at 10x as to spend $1M at 10x. That’s true but that framing can mislead you into thinking like the goal is spending money rather than the goal is generating goodness, or into insufficiently appreciating diminishing returns.)
(Money is not a monolith. Some kinds of money/donors are much better than others, per dollar. For example, often your marginal opportunities for flexible/savvy donors are better than those for donors who are not open to weird stuff or have random constraints. And tax considerations make money for nonprofits cheaper than other kinds of money. You should have different bars for different kinds of money/donors.)
Adverse selection is extremely important.
Mostly this is winner’s curse and a related phenomenon.
Winner’s curse is: if you think X is better than your peers do, that’s evidence that X is not as good as you think. Could opportunities like X get funded without you? If so, then the worlds where you’re counterfactual for funding X are just the worlds where nobody else wanted to fund X. Insofar as the others might have information that you don’t, this is a negative update on X.
Fortunately there’s often a good solution to this problem: just check with the grantmakers/experts who might have new info/takes. But sometimes you won’t be able to fully understand their view, because parts are secret or it’s not worth the time to deeply share models.
A related phenomenon is: if X is selected for looking great, X tends to be pretty good but also tends to be worse than it looks — you’re probably overestimating how good it is. The noisier your evaluations are, the worse this gets. This phenomenon is explored in How much do you believe your results?.
You should account for this by discounting great-looking but high-uncertainty prospects somewhat. On the other hand, if there’s no downside risk, uncertainty also has upside: lots of value comes from worlds where the opportunity is better than you think; the EV is greater than your median on what you’d say EV is if you investigated more.
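The selection effect above can be illustrated with a toy simulation (the noise model and numbers here are my own assumptions, not from the post): rank opportunities by a noisy measurement, take the top slice, and compare how good they looked to how good they were.

```python
import random

random.seed(0)

def selection_gap(noise_sd, n=20_000, top_frac=0.05):
    """How much the top-ranked opportunities disappoint, on average,
    when true quality ~ N(0,1) is observed with Gaussian noise."""
    true_quality = [random.gauss(0, 1) for _ in range(n)]
    measured = [t + random.gauss(0, noise_sd) for t in true_quality]
    ranked = sorted(zip(measured, true_quality), reverse=True)
    top = ranked[: int(n * top_frac)]
    avg_measured = sum(m for m, _ in top) / len(top)
    avg_true = sum(t for _, t in top) / len(top)
    return avg_measured - avg_true

print(selection_gap(0.5))  # modest gap with low-noise evaluations
print(selection_gap(2.0))  # much larger gap with noisy evaluations
```

The selected opportunities are still genuinely above average, but the gap between how good they look and how good they are grows with evaluation noise, which is the reason for discounting great-looking but high-uncertainty prospects.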
Also many sources of information are filtered, and sometimes people will try to mislead you in order to get money.
You should be somewhat muggable or you’ll miss some great opportunities. But the downside of being muggable is not just sometimes wasting money but also incentivizing people to try to exploit you. Prefer being mugged by the world to being mugged by a potentially-adversarial agent. Be willing to sacrifice a little value to be less exploitable. For example, avoid incentivizing people to wait to share opportunities with you until they’re urgent.
Prioritize between buckets.
Prioritization between buckets is more important than prioritization within buckets. The typical intervention in a great bucket is >>10x as good as the typical intervention in a mediocre bucket. This is not priced in; the best buckets are not as popular as they should be.
Information value
Sometimes information is very good.
E.g.: how good various desiderata are, how effective various interventions are for promoting desiderata, which unknown/uninvestigated opportunities are great, and what the opportunities will be like in the future and how to prepare. Grantmakers are largely prioritization researchers, and some parameters in your prioritization-model are crucial but unstable.
If you’ll have a high-uncertainty opportunity to spend $10M in a year, and you can spend $1M now to resolve a lot of uncertainty, that might be great.
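A toy value-of-information calculation makes the point concrete (the two-intervention setup and multiples are my own illustrative assumptions, keeping only the post's $10M / $1M framing):

```python
# Suppose next year you'll spend $10M on one of two interventions; one is
# 10x your bar and one is 1x, but you don't yet know which is which.
# Spending $1M on research now tells you.

bar = 1.0            # units of value per dollar
budget = 10_000_000
multiples = (10.0, 1.0)

# Without info: you pick at random, so you expect the average multiple.
ev_blind = budget * sum(multiples) / len(multiples)       # 55,000,000 units

# With info: you always pick the 10x option, minus the $1M research cost
# (valued at the bar).
ev_informed = budget * max(multiples) - 1_000_000 * bar   # 99,000,000 units

print(ev_informed - ev_blind)  # 44,000,000 units of surplus from the info
```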
Obviously prioritizing well is crucial. The great opportunities are many times better than the mediocre opportunities, even on the margin. Almost all of my donation-savvy friends regret their past donations (until recently); if they’re well-informed about great donation opportunities now but weren’t in the past, their donations now are many times better. If you’re pretty uninformed and you’ll get more information in the future, the value of waiting for information is generally greater than the value of donating sooner. (But sometimes spending money is a great way for the whole ecosystem to get more information.)
Optionality is very good, if you’ll have more information in the future.
Steering projects
Sometimes steering projects is important. You are not limited to deciding whether to fund a project. If you have good views on what a project should do, sometimes you should get the project to follow those views. You can make it a condition of the grant, you can just make your views clear in your grantmaker capacity (projects try to make their funders happy), or you can just share takes as an expert on what projects in this domain would be great and miscellaneous considerations in this domain.
But obviously when you’re wrong you’ll destroy a bunch of value. And you’ll destroy value when people defer to you more than you want, especially if they might misunderstand your views.
And obviously it’s costly if steering a project requires lots of work — your job should probably mostly be finding/creating amazing projects, not steering various good projects.
Steering power is limited. Fear theories of change that route through “empower this sketchy person and hope they do good things.”
Counterfactuality & funging
It’s important to understand counterfactuality and funging, especially if there are other grantmakers/donors in the space and you’re not fully aligned with them. But the naive consequentialist upshot—that you should try to be a donor-of-last-resort so that you never fund something if someone else would instead—is generally uncooperative and bad. I don’t know how grantmakers/donors should coordinate on sharing costs; it’s messy. Fortunately often it’s clear who’s responsible for funding something, e.g. because different actors have different niches.
Matching pledges are usually deceptive, but matching can be a fine way for grantmakers/donors to coordinate on sharing costs.
More
In some domains, the bottleneck is grantmaking/evaluation capacity and we’re in triage. In these cases, if you only recommend donation opportunities after seriously investigating, you’ll miss some great opportunities. It’s scary to make bets that might not just fail but also turn out to be predictably bad, but sometimes it’s the right thing to do. Unless there’s downside risk; if there might be large downside (beyond wasting money), you should be careful.
(This may not be the case. It depends on your focus area.)
I am not saying recommend more stuff even if it’s mediocre. I am saying maximize EV. Sometimes you don’t have time to carefully investigate X and so you have to decide between making X happen based on little investigation and X not happening. When deciding “I won’t make X happen,” be sad/scared about the badness of X not-happening in worlds where X is great, not just the goodness of X not-happening in worlds where X is not-great. If there’s no downside risk beyond wasting money, then a grant’s cost is limited but upside is unlimited.
If we’ll have more information in the future, that can be strong reason to hold off on making decisions.
This assumes you are decently competent, decent at doing 80/20 investigation (or quickly checking with others who are well-positioned to advise), and understand adverse selection and avoid making yourself exploitable.
It’s important to understand the grantmakers/donors relevant to your focus areas — for the above reason, for mitigating adverse selection, and because they have relevant expertise.
Having a personal $300K donation budget is substantially better than having a (savvy, aligned, flexible, high-bandwidth) $300K donor. Sometimes speed is crucial. Sometimes a project needs a commitment to move forward, but you don’t need to send money immediately, so you quickly make a pledge but can often find another donor to fill it. (Controlling a fund might also suffice.) Sometimes you really don’t want to have to write up a doc for a donor, then have a call with the donor, then wait on that donor and find another donor if that donor’s not into it, before you can make a commitment.
If something will require lots of input from you, treat that as a big cost. If something will require you to engage a bunch with lawyers/consultants/etc., treat that as a big cost.
Do the reading. Try to get context on everything and understand everything, until how you should specialize is clear.
Some stuff is 100x as important as other stuff. Think about which things you do are important, and then do more of that and try to stop doing everything else.
People generally overestimate effect sizes and are overconfident.
Feedback loops seem great. Idk, I don’t have good feedback loops. Also you just get better with practice.
Notes
Note: I subscribe to BOTEC maximalism: I put numbers on things whenever possible and those numbers are pretty load-bearing. As far as I know, nobody outside my team does that. I think most people are correct not to do it. It works great for us, especially for comparing interventions that target different desiderata, e.g. “make the US government better on AI safety” vs “make technical AI safety research happen.” But it only works because we’re good at quantifying the value (for the long-term future) of many (AI safety, better futures, politics, etc.) desiderata and interventions (and we can share state and resolve disagreements — it would be worse for large teams). For most people—even many math-y people—their BOTECs are often terrible, much worse than mere intuition. Sometimes it’s crucial to assess value in abstract units, especially for comparing different kinds of interventions. But it mostly seems fine if you’re like “here are some different things that are similarly good (and how they compare to our bar)” and then just compare new stuff to those things.
Note: many of these takes are a priori observations. You shouldn’t update as if these are all based on real-world experience.
Grantmaking reading recommendations
The best thing is Linch’s Some unfun lessons I learned as a junior grantmaker (which loosely inspired this post’s title). After that, consider (these all happen to be from CG):
“Grant investigation plan template from Open Philanthropy” (private doc; I got this from Max Daniel)
If you have reading recommendations, please share! I asked various grantmakers and they didn’t really have others.
This post is the beginning of my sequence inspired by my prioritization research and donation advising work.
[1] You counterfactually generated $9M of value. The people/orgs that actually do the project, if relevant, are also counterfactual for that value, but that’s fine; counterfactuals don’t sum to the total. The donor generated $1M of value. I assume your 10x judgment is after accounting for the opportunity cost of people/orgs, if relevant — the value you generate is the value of the project minus the opportunity cost of the people/orgs and the money required.
This is really great, thanks for writing this up.
Thanks for writing this, Zach. After spending the last 2.5 years working as a grantmaker, a lot of this resonates with me!
Rather than flag the specific bits I agree with, I’ll just say: this seems to me like a pretty useful piece for anyone trying to understand the mental models many AI safety grantmakers tend to use in practice.
Curated. “How do we actually donate money to do good in the world” is still just a very important topic.
This seems relevant both to professional grantmakers, and to people (of which I’m seeing more of lately) who end up in some kind of “temporary or pseudo-grantmaker” position – from participating in a round of an evaluation process like SFF, or being a regranter, or simply ending with enough money from equity that it’s worth starting to think like a grantmaker.
A lot of the ideas in this post are ones I’ve seen discussed in in-person conversations but not really written up in a legible way.
This is fantastic, tons of things I agree with strongly.
That said, my big unaddressed question is about scale; obviously it’s easier to fund one $1m project than 5 $200k projects, but the smaller projects are often higher leverage. And that goes for smaller things too.
So taking this much further, in my experience lots of really great early stage opportunities are $5k or $10k grants (help someone write a paper, or fund a small experiment to check if a new idea works,) which can have as much expected impact as a marginal $200k on different opportunities; how do you manage these, both in terms of filtering and finding them, and managing the relatively very high overhead costs for them? (Or do you not find that this is true, or do you have a minimum?)
Good point; I agree small opportunities can be great.
This post is more like I have a priori observations than I know what processes work well in practice. I don’t claim the latter. But since you asked:
I don’t do a good job of finding small opportunities. When small opportunities come to my attention, my process is something like:
(If it’s out-of-scope of my expertise, drop it, unless an advisor is strongly vouching for it or it seems truly amazing or something.)
Do I have a great sense of how good it is, in particular because it’s just a small version of something I’ve investigated in the past? If so, use that.
Otherwise, is there someone else who should decide? If so, get them to decide.
Otherwise, are there positive second-order effects to actually investigating (e.g. maybe noticing or being able to evaluate many more opportunities like this)? If so, do that.
Otherwise, try to make a decision quickly. If downside risk is low and upside is maybe-great, then make the grant.
Er, that’s conflating “small grant” with “low-stakes.” Sometimes the amount of money is small but the opportunity is high-stakes — sometimes the upside is high; sometimes there are costs or downside risks much greater than the cost of the money. It’s the low-stakes opportunities that you want to decide quickly on.
An abbreviated heuristic is: if it’s in-scope and it seems great and it’s hard to imagine regretting it substantially more than if you lit the money on fire, just fund all such small opportunities. Funding lots of small opportunities is better than funding few.
Note that being exploitable has downsides beyond wasting money. (Internet people reading this, please don’t ask me for money because you read this; I’m very unlikely to give you money even for good things because my expertise is limited to a small fraction of good things.)
Probably in my domain relative to yours, (1) there’s way fewer small one-off opportunities and (2) a greater fraction of them have substantial downside risk.
I really liked this, thank you for writing it.
What I didn’t expect about being a funder by James Ozden came to mind.
On the BOTEC maximalism and your bar, can you say more? I guess I’ve been a bit cluster-pilled, especially in practice given how bad the thinking I’ve seen is in many BOTECs, so if anyone else said this I’d be skeptical, but I respect your thinking and I thought Eric’s CEA of donating $1k to Alex Bores was good, so I’m intrigued.
Stay tuned for the rest of the sequence!
“Money is not a monolith.” is one of the best truisms I’ve seen in a while. I’m going to reuse that. Thanks!