Considerations on Compensation

This post was prompted by the recent announcement of Lightcone Infrastructure, and particularly Elizabeth’s response in a comment thread on compensation. I’d like to thank Max Wallace, Ruby, and Miranda Dixon-Luinenburg for their comments and copyediting.

Context (& disclaimer): I’m a software engineer and donate ~exclusively to long-term-focused orgs (currently MIRI). While most of the arguments are intended to be broadly applicable, this post was written with that specific framing in mind, and one argument does explicitly depend on the question of AGI timelines.

You’re an EA-aligned organization that isn’t funding constrained. Why should you (or shouldn’t you) pay market rate?

Talent Pool

The econ 101 view says that if you want more of something, one easy way to get it is to pay more money. But where exactly would we be getting “more” from?

The already-convinced

There is a group of people who already know about your mission and agree with it, but aren’t doing direct work. This group consists of:

  • Those earning to give

  • Those who are earning enough to give but don’t (for a variety of reasons)

  • Those who don’t have the capacity to either do direct work or earn to give

On the margin, who in this group could be convinced to do direct work?

Those earning to give

There are many reasons why someone might be earning to give instead of doing direct work, assuming they have the skillset necessary for the latter (or can acquire it). Here is a list of plausible reasons that could be overcome with more money.

  • Information - they may simply not know that their favorite EA org pays (close to) market rates! This is an area where there may be some “free wins” obtainable with just better PR—in that sense I think Lightcone’s announcement and public salary listing is a great start.

  • Maintaining their present lifestyle—this one is pretty sensitive to the actual numbers involved, but it needs to be said that in the Bay Area there are things that are difficult to afford on typical non-profit salaries, like “owning a house within reasonable commuting distance of the office” and “having kids”. These can be concerns even for highly paid software engineers, if they’ve already done one or the other (or both!) and thus locked themselves into those costs.

  • Maintaining optionality—if you take a pay cut to do direct work and circumstances change in the future such that it no longer makes sense for you to keep doing whatever you’re doing, you have less money and thus fewer options to deal with whatever problems may arise. This doesn’t matter if you’re operating on short timelines and are extremely confident in them. Kinda sucks if you’re only mostly sure, though.

  • Hedging against the loss of career capital—technically this belongs under “Maintaining optionality”, but it’s a fairly specific point that I don’t think I’ve seen raised elsewhere, so I wanted to emphasize it. At least in software, most work that you’d do at an EA org will not grow your career capital, compared to the counterfactual work you’d be doing at a better-paying tech company. (The major exceptions are roles at research orgs doing large-scale ML R&D, which seem extremely similar to such roles at tech companies and look just as good, if not better, on your resume. Think OpenAI, Anthropic, etc.) If you jump in the pool, you’ll likely have a harder time getting out later, and this gets worse the longer you stay in. For a motivated and skilled software engineer, this could easily dwarf the explicit pay cut when it comes to calculating foregone future earnings (see the rough sketch after this list).

  • Social expectations—this is mostly just here for completeness’ sake. I don’t expect there are a huge number of people who would go do direct work except that their family/​friends/​social circle would look down on them for taking a pay cut. It’s a big world out there, but I’d expect subtler status concerns to dominate and those are harder to move just by throwing money at them (though see the point about PR under “Information”).

  • Preference for industry work—some people might expect that they’d enjoy direct work less. Maybe they’d be happy to either take a pay cut, or sacrifice some of the intrinsic pleasure of the job, but not both?
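
To make the career-capital point a little more concrete, here is a minimal back-of-the-envelope sketch. Every number in it (salaries, growth rate, the size of the “career capital” drag, the length of the remaining career) is a hypothetical assumption chosen purely for illustration, not data from any real comp survey; the only thing it’s meant to show is that a compounding drag on future market salary can plausibly outweigh a flat pay cut taken today.

```python
# Back-of-the-envelope sketch; all numbers are illustrative assumptions.

def foregone_earnings(years, market_salary, direct_salary, capital_penalty):
    """Compare the explicit pay cut against an assumed career-capital cost.

    capital_penalty: assumed drag on annual market-salary growth for each
    year spent doing direct work (e.g. 0.03 = 3 percentage points/year).
    """
    # Flat pay cut while doing direct work (ignores raises on both sides).
    explicit_cut = (market_salary - direct_salary) * years

    # Assume market salary would otherwise grow ~5%/year; direct work slows
    # that growth by `capital_penalty` per year.
    counterfactual_salary = market_salary * 1.05 ** years
    eroded_salary = market_salary * (1.05 - capital_penalty) ** years

    # Assume the resulting salary gap persists (without widening further)
    # over a further 20-year career back in industry.
    career_capital_cost = (counterfactual_salary - eroded_salary) * 20
    return explicit_cut, career_capital_cost


cut, capital = foregone_earnings(
    years=5, market_salary=300_000, direct_salary=200_000, capital_penalty=0.03
)
print(f"Explicit pay cut over 5 years:       ${cut:,.0f}")      # ~$500,000
print(f"Assumed career-capital cost (20 yr): ${capital:,.0f}")  # ~$1,000,000
```

Under these made-up numbers the explicit pay cut is roughly $500k, while the assumed career-capital drag costs roughly $1M over the rest of a career. The exact figures don’t matter; the point is that the second term can plausibly be the larger one.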

Those who are earning enough to give but don’t

If we want to be precise, there’s obviously a spectrum—many people who are earning to give could probably give more without sacrificing anything except optionality. This group is all the way on one side of that spectrum. When I was in this group, the loss of optionality seemed to be the dominant factor informing that decision. I don’t have great insight into what else could be driving it for other people, though, beyond the factors already listed in the previous category.

Those who don’t have the capacity to either do direct work or earn to give

If the lack of capacity is a permanent condition, more money obviously won’t do anything, so we can ignore that. If it’s a temporary condition, like “still a university student”, then there are probably some cases where “more money” could be motivating in the right direction (focusing on things more applicable to direct work, graduating faster, dropping out, etc.). I think the effect here is pretty marginal, though.

The skeptical

I don’t think more money moves the needle for anyone who’s familiar with a cause area but isn’t convinced it’s worth working on or directing resources to.

The not-yet-aware

Most (extremely online) software engineers have never heard of EA, but they have heard of FAANG[1]! I think it’s extremely likely that the number of software engineers who would be mission-aligned if they but knew about the mission is substantial, and each additional engineer who first hears about [insert EA org here] on an online tech forum because “[insert EA org here] is competitive with FAANG, isn’t that crazy??” is another opportunity to introduce someone to the community. PR concerns deserve their own section (below), but they’re not entirely negative. If we are substantially bottlenecked on talent (particularly experienced talent), then increasing the top of the funnel is an urgent priority, and could be much more effective than trying to pull from the relatively small set of people who are already aligned on mission, have the necessary skillset and talent, and aren’t otherwise constrained by exogenous factors.

One potential downside here is that the pipeline could fill up with technically qualified but unaligned (or worse, pretending-to-be-aligned) candidates. There are ways to manage this but it does deserve some thought.

PR

One obvious concern with paying top-of-market salaries is the PR risk. “Local non-profit pays its engineers half a million dollars a year!” certainly looks like a bad news headline, but I think this is not actually a huge concern.

First, “Local non-profit pays its engineers $300k per year!” does not leave the median headline-reader with a meaningfully different impression.

Second, my subjective impression (which could stand to be validated empirically) is that most sources of funding for orgs focused on long-term causes and meta work won’t be offended by market-rate compensation for talent. Quite possibly they would have the opposite reaction! There is a real risk that it could turn away some potential future donors (or even employees) on the margin, particularly those who aren’t yet familiar with EA and the goals and principles behind it. If you strongly think we live in a universe with a short timeline (<30 years) to AGI, this is not a significant factor. If not, this deserves consideration.

Other Considerations

Let’s imagine you’re spinning up a new EA organization and you have more money than you know what to do with. You have so much money that you’ve started paying people $500 to write book reviews! (I kid, I kid. $500 book reviews are cheap, as far as a hits-based approach goes. But they also aren’t something that MIRI/​LW/​Lightcone had money for 5 years ago.)

How do you decide what you’re going to pay people? I’m not totally sure what strategy I would try first, but I don’t think it’s “try to capture as much surplus ‘value’ as I can from the relationship”. I wouldn’t want to optimize for being just barely the best option my desired candidates have available to them—I’m in a rush! I’m under a time crunch! I need as many qualified candidates as possible beating down the doors to work with me. I want to be solving the problems of “how do I filter for the best of the best” and “how do I grow an organization as fast as possible while remaining strongly mission-aligned”, not “where on earth am I going to find motivated, competent, and aligned engineers to work early-stage-startup-style overtime on what is effectively line-of-business software”!

That’s just a guess as to what I’d come up with if I were trying to write a comp policy from first principles, anyway. I’m not an HR professional and there are certainly considerations I could be missing, but even accounting for things you can’t put in writing, I don’t think the case for paying below-market rates is terribly compelling.


  1. ↩︎

    FAANG stands for “Facebook, Amazon, Apple, Netflix, and Google”. Originally the acronym was FANG and referred to a group of high-performing tech stocks, but it was picked up by software engineers online as shorthand for the tech companies that were notorious for paying extremely well at the time.