Some things I noticed while LARPing as a grantmaker

Written to a new grantmaker.

  • Most value comes from finding/​creating projects many times your bar, rather than discriminating between opportunities around your bar. If you find/​create a new opportunity to donate $1M at 10x your bar (and cause it to get $1M, which would otherwise be donated to a 1x thing), you generate $9M of value (at your bar).[1] If you cause a $1M at 1.5x opportunity to get funded or a $1M at 0.5x opportunity to not get funded, you generate $500K of value. The former is 18 times as good.

    • You should probably be more like “I do research to figure out what projects should exist, then make them exist” than “I evaluate the applications that come to me.” That said, most great ideas come from your network, not from your personal brainstorming.

    • In some buckets, the low-hanging fruit will be plucked. In others, nobody’s on the ball and amazing opportunities get dropped. If you’re working in a high-value bucket where nobody’s on the ball, tons of alpha is on the table. (Assuming enough donors or grantmakers will listen to you to fund your best stuff.)

    • I talk about “10x opportunities” and “1x opportunities” for simplicity here. It might be better to focus on goodness. Like: our bar is one unit of value per dollar. An opportunity to generate 10M units for $1M is exciting: it creates a surplus of 9M units. In “10x” mindset, it’s twice as good to spend $2M at 10x as to spend $1M at 10x. That’s true, but that framing can mislead you into thinking the goal is spending money rather than generating goodness.

    • (Money is not a monolith. Some kinds of money/​donors are much better than others, per dollar. For example, often your marginal opportunities for flexible/​savvy donors are better than those for donors who are not open to weird stuff or have random constraints. And tax considerations make money for nonprofits cheaper than other kinds of money. You should have different bars for different kinds of money/​donors.)
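The surplus arithmetic in the first bullet can be sketched numerically. This is a toy calculation, assuming value is measured in “at-bar dollars” ($1 at your bar = 1 unit of value):

```python
# Toy surplus calculation for the examples above.
# Value is measured "at your bar": $1 at the bar = 1 unit of value.

def surplus(amount, multiple, counterfactual_multiple=1.0):
    """Value created by moving `amount` dollars from an opportunity at
    `counterfactual_multiple` times the bar to one at `multiple` times the bar."""
    return amount * (multiple - counterfactual_multiple)

created = surplus(1_000_000, 10)    # create a 10x opportunity:      $9M surplus
funded = surplus(1_000_000, 1.5)    # fund a 1.5x instead of a 1x:   $0.5M
blocked = -surplus(1_000_000, 0.5)  # stop a 0.5x from getting money: $0.5M

print(created, funded, blocked)  # 9000000.0 500000.0 500000.0
print(created / funded)          # 18.0 -- the former is 18 times as good
```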

  • Adverse selection is extremely important.

    • Mostly this is the winner’s curse phenomenon. Could opportunities like X get funded without you? If so, then the worlds where you’re counterfactual for funding X are just the worlds where nobody else wanted to fund X. Insofar as the others might have information that you don’t, this is a negative update on X.

      • Fortunately there’s often a good solution to the winner’s curse: just check with the grantmakers/​experts who might have new info/​takes. But sometimes you won’t be able to fully understand their view, because parts are secret or it’s not worth the time to deeply share models.

    • Also many sources of information are filtered, and sometimes people will try to mislead you in order to get money.

      • You should be somewhat muggable or you’ll miss some great opportunities. But the downside of being muggable is not just sometimes wasting money but also incentivizing people to try to exploit you. Prefer to be mugged by the world than by a potentially-adversarial agent. Be willing to sacrifice a little value to be less exploitable. For example, avoid incentivizing people to wait to share opportunities with you until they’re urgent.

    • Fear theories of change that route through “empower this sketchy person and hope they do good things.”
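The winner’s-curse dynamic above can be made concrete with a small simulation. This is an illustrative sketch with invented numbers (opportunity qualities, signal noise, and the number of other grantmakers are all assumptions): each opportunity has a true quality, other grantmakers see noisy signals of it, and we look at what’s left over once they’ve funded everything whose signal cleared the bar.

```python
import random

# Toy winner's-curse simulation (all parameters invented for illustration).
# Each opportunity has a true quality (in multiples of the bar). Each of
# `n_others` other grantmakers sees a noisy signal and funds anything whose
# signal clears the bar. Conditioning on "nobody else funded it" selects for
# opportunities that others got bad news about.

random.seed(0)

def simulate(n_opps=100_000, n_others=3, bar=1.0, noise=0.5):
    all_quality, unfunded_quality = [], []
    for _ in range(n_opps):
        quality = random.gauss(1.0, 0.5)  # true multiple-of-bar
        all_quality.append(quality)
        signals = [quality + random.gauss(0, noise) for _ in range(n_others)]
        if all(s < bar for s in signals):  # nobody else wanted to fund it
            unfunded_quality.append(quality)
    return (sum(all_quality) / len(all_quality),
            sum(unfunded_quality) / len(unfunded_quality))

avg_all, avg_unfunded = simulate()
print(avg_all, avg_unfunded)  # the still-unfunded pool averages notably worse
```

The still-unfunded pool averages worse than the overall pool, which is the “negative update on X” in the bullet above; checking with the other grantmakers recovers the information behind their passes.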

  • Sometimes information is very good.

    • E.g.: how good various desiderata are, how effective various interventions are for promoting desiderata, which unknown/​uninvestigated opportunities are great, and what the opportunities will be like in the future and how to prepare. Grantmakers are largely prioritization researchers, and some parameters in your prioritization-model are crucial but unstable.

    • If you’ll have a high-uncertainty opportunity to spend $10M in a year, and you can spend $1M now to resolve a lot of uncertainty, that might be great.

    • Obviously prioritizing well is crucial. The great opportunities are many times better than the mediocre opportunities, even on the margin. Almost all of my donation-savvy friends regret the donations they made until recently; if they’re well-informed about great donation opportunities now but weren’t in the past, their current donations are many times better. If you’re pretty uninformed and you’ll get more information in the future, the value of waiting for information is generally greater than the value of donating sooner. (But sometimes spending money is a great way for the whole ecosystem to get more information.)

      • Optionality is very good, if you’ll have more information in the future.
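A toy expected-value calculation of that kind of information purchase, with invented numbers (the 50/50 split, the 10x and 1x buckets, and the research cost are all assumptions, not from the post):

```python
# Toy value-of-information calculation (all numbers hypothetical).
# Next year you'll allocate $10M to one of two buckets; one is 10x the bar,
# one is 1x, and without research you don't know which is which (50/50).

budget = 10_000_000
research_cost = 1_000_000

# Guess at random: half the time you pick the 10x bucket, half the 1x bucket.
ev_guess = budget * (0.5 * 10 + 0.5 * 1)  # 55M units

# Spend $1M on research that identifies the 10x bucket; the research dollars
# carry a ~1 unit/dollar opportunity cost (they could have gone to the bar).
ev_informed = budget * 10 - research_cost * 1  # 99M units

print(ev_informed - ev_guess)  # 44000000.0 -- far more than the research cost
```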

  • Prioritization between buckets is more important than prioritization within buckets. The marginal intervention in a great bucket is >>10x as good as the marginal intervention in a mediocre bucket.

  • In some domains, the bottleneck is grantmaking/​evaluation capacity and we’re in triage. In these cases, if you only recommend donation opportunities after seriously investigating, you’ll miss some great opportunities. It’s scary to make bets that might not just fail but also turn out to be predictably bad, but sometimes it’s the right thing to do. Unless there’s downside risk; if there might be large downside (beyond wasting money), you should be careful.

    • (This may not be the case. It depends on your focus area.)

    • I am not saying recommend more stuff even if it’s mediocre. I am saying maximize EV. Sometimes you don’t have time to carefully investigate X and so you have to decide between making X happen based on little investigation and X not happening. When deciding “I won’t make X happen,” be sad/​scared about the badness of X not-happening in worlds where X is great, not just the goodness of X not-happening in worlds where X is not-great. If there’s no downside risk beyond wasting money, then a grant’s cost is limited but upside is unlimited.

    • If we’ll have more information in the future, that can be strong reason to hold off on making decisions.

    • This assumes you are decently competent: decent at 80/20 investigation (or at quickly checking with others who are well-positioned to advise), and that you understand adverse selection and avoid making yourself exploitable.

  • Sometimes steering projects is important. You are not limited to deciding whether to fund a project. If you have good views on what a project should do, sometimes you should get the project to follow those views. You can make it a condition of the grant, you can just make your views clear in your grantmaker capacity (projects try to make their funders happy), or you can just share takes as an expert on what projects in this domain would be great and miscellaneous considerations in this domain. But obviously avoid situations where people defer to you more than you want, especially if they might misunderstand your views. And obviously it’s costly if steering a project requires lots of work — your job should probably mostly be finding/​creating amazing projects, not steering various good projects. And obviously when you’re wrong you’ll destroy a bunch of value.

  • It’s important to understand counterfactuality and funging, especially if there are other grantmakers/​donors in the space and you’re not fully aligned with them. But the naive consequentialist upshot—that you should try to be a donor-of-last-resort so that you never fund something if someone else would instead—is generally uncooperative and bad. I don’t know how grantmakers/​donors should coordinate on sharing costs; it’s messy. Fortunately often it’s clear who’s responsible for funding something, e.g. because different actors have different niches.

  • It’s important to understand the grantmakers/​donors relevant to your focus areas — for the above reason, for mitigating adverse selection, and because they have relevant expertise.

  • Having a personal $300K donation budget is substantially better than having a (savvy, aligned, flexible, high-bandwidth) $300K donor. Sometimes speed is crucial. Sometimes a project needs a commitment to move forward but doesn’t need the money immediately, so you can quickly make a pledge and often later find another donor to fill it. (Controlling a fund might also suffice.) Sometimes you really don’t want to have to write up a doc for a donor, then have a call with the donor, then wait on that donor and find another donor if that donor’s not into it, before you can make a commitment.

  • If something will require lots of input from you, treat that as a big cost. If something will require you to engage a bunch with lawyers/​consultants/​etc., treat that as a big cost.

  • Do the reading. Try to get context on everything and understand everything, until how you should specialize is clear.

  • Feedback loops seem great. Idk, I don’t have good feedback loops. Also you just get better with practice.

Note: I subscribe to BOTEC maximalism: I put numbers on things whenever possible and those numbers are pretty load-bearing. As far as I know, nobody outside my team does that. I think most people are correct not to do it. It works great for us, especially for comparing interventions that target different desiderata, e.g. “make the US government better on AI safety” vs “make technical AI safety research happen.” But it only works because we’re good at quantifying the value (for the long-term future) of many (AI safety, better futures, politics, etc.) desiderata and interventions (and we can share state and resolve disagreements — it would be worse for large teams). For most people—even many math-y people—their BOTECs are often terrible, much worse than mere intuition. Sometimes it’s crucial to assess value in abstract units, especially for comparing different kinds of interventions. But it mostly seems fine if you’re like “here are some different things that are similarly good (and how they compare to our bar)” and then just compare new stuff to those things.
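A minimal sketch of what a BOTEC of this kind might look like. All numbers, names, and structure here are invented for illustration (not the team’s actual model): each intervention gets a short chain of rough estimates, and everything lands in one abstract currency so interventions targeting different desiderata become comparable.

```python
# Minimal BOTEC sketch in the spirit described above (all numbers invented).
# Value is in abstract "units of long-term value"; putting every intervention
# in the same currency is what makes cross-desiderata comparison possible.

botecs = {
    "make the US government better on AI safety": {
        "cost": 2_000_000,          # dollars
        "p_success": 0.10,          # rough guess
        "value_if_success": 500e6,  # units, rough guess
    },
    "make technical AI safety research happen": {
        "cost": 1_000_000,
        "p_success": 0.30,
        "value_if_success": 40e6,
    },
}

bar = 1.0  # units of value per dollar

for name, b in botecs.items():
    units_per_dollar = b["p_success"] * b["value_if_success"] / b["cost"]
    print(f"{name}: {units_per_dollar / bar:.1f}x the bar")
```

The load-bearing (and hard) part is the `value_if_success` estimates across very different desiderata; the arithmetic itself is trivial, which is why terrible inputs make BOTECs worse than intuition.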

Note: many of these takes are a priori observations. You shouldn’t update as if these are all based on real-world experience.


Grantmaking reading recommendations

The best thing is Linch’s Some unfun lessons I learned as a junior grantmaker (which loosely inspired this post’s title). After that, consider (these all happen to be from CG):

If you have reading recommendations, please share! I asked various grantmakers and they didn’t really have others.


This post is the beginning of my sequence inspired by my prioritization research and donation advising work.

  1. ^

    You counterfactually generated $9M of value. The people/​orgs that actually do the project, if relevant, are also counterfactual for that value, but that’s fine; counterfactuals don’t sum to the total. The donor generated $1M of value. I assume your 10x judgment is after accounting for the opportunity cost of people/​orgs, if relevant — the value you generate is the value of the project minus the opportunity cost of the people/​orgs and the money required.