The funding conversation we left unfinished
People working in the AI industry are making stupid amounts of money, and word on the street is that Anthropic is going to have some sort of liquidity event soon (for example, possibly an IPO sometime next year). A lot of people working in AI are familiar with EA and intend to direct donations our way (if they haven’t started already). People are starting to discuss what this might mean for their own personal donations and for the ecosystem, and this is encouraging to see.
It also has me thinking about 2022. Immediately before the FTX collapse, we were just starting to reckon, as a community, with the pretty significant vibe shift in EA that came from having a lot more money to throw around.
CitizenTen, in “The Vultures Are Circling” (April 2022), puts it this way:
The message is out. There’s easy money to be had. And the vultures are coming. On many internet circles, there’s been a worrying tone. “You should apply for [insert EA grant], all I had to do was pretend to care about x, and I got $$!” Or, “I’m not even an EA, but I can pretend, as getting a 10k grant is a good instrumental goal towards [insert-poor-life-goals-here]” Or, “Did you hear that a 16 year old got x amount of money? That’s ridiculous! I thought EA’s were supposed to be effective!” Or, “All you have to do is mouth the words community building and you get thrown bags of money.”
Basically, the sharp increase in rewards has led the number of people who are optimizing for the wrong thing to go up. Hello Goodhart. Instead of the intrinsically motivated EA, we’re beginning to get the resume padders, the career optimizers, and the type of person that cheats on the entry test for preschool in the hopes of getting their child into a better college. I’ve already heard of discord servers springing up centered around gaming the admission process for grants. And it’s not without reason. The Atlas Fellowship is offering a 50k, no strings attached scholarship. If you want people to throw out any hesitation around cheating the system, having a carrot that’s larger than most adult’s yearly income will do that.
Other highly upvoted posts from that era:
I feel anxious that there is all this money around. Let’s talk about it—Nathan Young, March 2022
Free-spending EA might be a big problem for optics and epistemics—George Rosenfield, April 2022
EA and the current funding situation—Will MacAskill, May 2022
The biggest risk of free-spending EA is not optics or motivated cognition, but grift—Ben Kuhn, May 2022
Bad Omens in Current Community Building—Theo Hawking, May 2022
The EA movement’s values are drifting. You’re allowed to stay put.—Marisa, May 2022
For many reasons, I wish FTX hadn’t done fraud and collapsed, but one feels especially salient currently: we never finished processing how abundant funding impacts a high-trust altruistic community. The conversation had barely started.
I would say that I’m worried about these dynamics emerging again, but there’s something a little more complicated here. Ozy actually calls out a similar strand of dysfunction in (parts of) EA in early 2024:
Effective altruist culture ought to be about spending resources in the most efficient way possible to do good. Sure, sometimes the most efficient way to spend resources to do good doesn’t look frugal. I’ve long advocated for effective altruist charities paying their workers well more than average for nonprofits. And a wise investor might make 99 bets that don’t pay off to get one that pays big. But effective altruist culture should have a laser focus on getting the most we can out of every single dollar, because dollars are denominated in lives.
...
It’s cool and high-status to travel the world. It’s cool and high-status to go on adventures. It’s cool and high-status to spend time with famous and influential people. And, God help us, it’s cool and high-status to save the world.

I think something like this is the root of a lot of discomfort with showy effective altruist spending. It’s not that yachting is expensive. It’s that if your idea of what effective altruists should be doing is yachting, a reasonable person might worry that you’ve lost the plot.
So these dynamics are not “emerging again”. They haven’t left. And I’m worried that they might get turbocharged when money comes knocking again.
A basic issue with a lot of deliberate philanthropy is the tension between:
In many domains, many of the biggest gains are likely to come from marginal opportunities, e.g. because they have more value of information, larger upsides, and more often address neglected areas (which are therefore plausibly strategically important).
Marginal opportunities are harder to evaluate.
There’s less preexisting understanding, on the part of fund allocators.
The people applying would tend to be less tested.
Therefore, it’s easier to game.
The kneejerk solution I’d propose is “proof of novel work”. If you want funding to do X, you should show that you’ve done something to address X that others haven’t done. That could be a detailed, insightful write-up (which indicates serious thinking / fact-finding); that could be some work you did on the side, which isn’t necessarily conceptually novel but is useful work on X that others were not doing; etc.
I assume that this is an obvious / not new idea, so I’m curious where it doesn’t work. Also curious what else has been tried. (E.g. many organizations do “don’t apply, we only give to {our friends, people we find through our own searches, people who are already getting funding, …}”.)
So let me jump in and say: I’ve been on Less Wrong since it started, and have engaged with topics like transhumanism, saving the world, and the nature of reality since before 2000; and to the best of my recollection, I have never received any serious EA or rationalist or other type of funding, despite occasionally appealing for it. So for anyone worried about being corrupted by money: if I can avoid it so comprehensively, you can do it too! (The most important qualities required for this outcome may be a sense of urgency and a sense of what’s important.)
Slightly more seriously, if there is anyone out there who cares about topics like fundamental ontology, superalignment, and theoretical or meta-theoretical progress in a context of short timelines, and who wishes to fund it, or who has ideas about how it might be funded, I’m all ears. By now I’m used to having zero support of that kind, and certainly I’m not alone out here, but I do suspect there are substantial lost opportunities involved in the way things have turned out.
This is crossposted from the EA Forum because I expect similar (but weaker) dynamics to impact the rationality community.