Another Critique of Effective Altruism

Cross-posted from my blog. It is almost certainly a bad idea to let this post be your first exposure to the effective altruist movement. You should at the very least read these two posts first.


Recently Ben Kuhn wrote a critique of effective altruism. I’m glad to see such self-examination taking place, but I’m also concerned that the essay did not attack some of the most serious issues I see in the effective altruist movement, so I’ve decided to write my own critique. Due to time constraints, this critique is short and incomplete. I’ve tried to bring up arguments that would make people feel uncomfortable and defensive; hopefully I’ve succeeded.

Briefly, here are some of the major issues I have with the effective altruism movement as it currently stands:

  • Over-focus on “tried and true” and “default” options, which may both reduce actual impact and decrease exploration of new, potentially high-value opportunities.

  • Over-confident claims coupled with insufficient background research.

  • Over-reliance on a small set of tools for assessing opportunities, which leads many to underestimate the value of things such as “flow-through” effects.

The common theme here is a subtle underlying message that simple, shallow analyses can allow one to make high-impact career and giving choices, and absolve one of the need to dig further. I doubt that anyone explicitly believes this, but I do believe that this theme comes out implicitly, both in the arguments people make and in the actions people take.

Lest this essay give a mistaken impression to the casual reader, I should note that there are many exemplary effective altruists who I feel are mostly immune to the issues above; for instance, the GiveWell blog does a very good job of warning against the first and third points, and I would recommend that anyone who isn’t already subscribed to it do so (and there are other examples that I’m failing to mention). But for the purposes of this essay, I will ignore this fact except for the current caveat.

Over-focus on “tried and true” options


It seems to me that the effective altruist movement over-focuses on “tried and true” options, both in giving opportunities and in career paths. Perhaps the biggest example of this is the prevalence of “earning to give”. While this is certainly an admirable option, it should be considered as a baseline to improve upon, not a definitive answer.

The biggest issue with the “earning to give” path is that careers in finance and software (the two most common avenues for it) are incredibly straightforward and secure. The two things finance and software have in common are a well-defined application process, similar to the one for undergraduate admissions, and the near-guarantee that, given reasonable job performance, one will continue to receive promotions and raises (this probably entails working hard, but the end result is rarely in doubt). One also gets a constant source of extrinsic positive reinforcement from the money one earns. Why do I call these things an “issue”? Because I think these attributes encourage people to pursue such paths without looking for less obvious, less certain, but ultimately better ones. One in six Yale graduates goes into finance and consulting, seemingly due to the simplicity of applying and the easy supply of extrinsic motivation. My intuition is that this ratio is higher than an optimal society would have, even if such people commonly gave generously (and it is certainly much higher than the fraction of people who enter college planning to pursue such paths).

Contrast this with, for instance, working at a start-up. Most start-ups are low-impact, but it is undeniable that at least some have been extraordinarily high-impact, so this seems like an area that effective altruists should be considering strongly. Why aren’t there more of us at 23&me, or Coursera, or Quora, or Stripe? I think it is because these opportunities are less obvious and take more work to find, because once you start working it often isn’t clear whether what you’re doing will have a positive impact, and because your future job security is massively uncertain. There are few sources of extrinsic motivation in such a career: perhaps more at one of the companies mentioned above, which are reasonably established and have customers, but what about the 4-person start-up teams working in a warehouse somewhere? Some of them will go on to do great things, but right now their lives must be full of anxiety and uncertainty.

I don’t mean to fetishize start-ups. They are just one well-known example of a potentially high-value career path that, to me, seems underexplored within the EA movement. I would argue (perhaps self-servingly) that academia is another such path, with similar psychological obstacles: every 5 years or so you have the opportunity to get kicked out (e.g. when applying for faculty jobs, or when coming up for tenure), you need to relocate regularly, few people will read your work and even fewer will praise it, and it won’t be clear whether it had a positive impact until many years down the road. And beyond the “obvious” alternatives of start-ups and academia, what of the paths that haven’t been created yet? GiveWell was revolutionary when it came about. Who will be the next GiveWell? And by this I don’t mean the next charity evaluator, but the next set of people who fundamentally alter how we view altruism.

Over-confident claims coupled with insufficient background research


The history of effective altruism is littered with over-confident claims, many of which have later turned out to be false. In 2009, Peter Singer claimed that you could save a life for $200 (and many others repeated his claim). While the number was already questionable at the time, by 2011 we discovered that it was completely off. New numbers were then thrown around: from figures still in the hundreds of dollars (GWWC’s estimate for SCI, which was later shown to be flawed) up to $1600 (GiveWell’s estimate for AMF, which GiveWell itself expected to go up, and which indeed did go up). These numbers were often cited without caveats, as were other claims, such as the claim that the effectiveness of charities can vary by a factor of 1,000. How many people citing these numbers understood the process that generated them, or the high degree of uncertainty surrounding them, or the inaccuracy of past estimates? How many would have pointed out that saying charities vary by a factor of 1,000 in effectiveness is by itself not very helpful, and is more a statement about how bad the bottom end is than about how good the top end is?
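To see why the factor-of-1,000 claim is so uninformative on its own, consider a toy calculation (every number below is invented for illustration and is not drawn from any actual charity estimate):

```python
# Toy illustration with invented numbers: the same 1,000x spread in
# cost-effectiveness is consistent with very different beliefs about
# how good the top charity actually is.
scenarios = {
    "optimistic":  {"top": 2_000,   "bottom": 2_000_000},    # cost per life saved, $
    "pessimistic": {"top": 200_000, "bottom": 200_000_000},
}

for name, s in scenarios.items():
    ratio = s["bottom"] / s["top"]
    print(f"{name}: top = ${s['top']:,}/life, bottom = ${s['bottom']:,}/life, "
          f"ratio = {ratio:,.0f}x")

# Both scenarios report the same 1,000x ratio, yet the value of a donation
# to the top charity differs between them by a factor of 100.
```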

More problematic than the careless bandying of numbers is the tendency not to do strong background research. A common pattern I see is this: an effective altruist makes a bold claim, then when pressed on it offers a heuristic justification together with the assertion that “estimation is the best we have”. This sort of argument acts as a conversation-stopper (and can also be quite annoying, which may be part of what drives some people away from effective altruism). In many of these cases, there are relatively easy opportunities to do background reading and further educate oneself about the claim being made. It can appear to an outside observer as though people are opting for the fun, easy activity (speculation) rather than the harder and more worthwhile one (research). Again, I’m not claiming that this is people’s explicit thought process, but it does seem to be what ends up happening.

Why haven’t more EAs signed up for a course on global security, or tried to understand how DARPA funds projects, or learned about third-world health? I’ve heard claims that this would be too time-consuming relative to the value it provides, but that seems like a poor excuse if we want to be taken seriously as a movement (or even just want to reach consistently accurate conclusions about the world).

Over-reliance on a small set of tools


Effective altruists tend to have a lot of interest in quantitative estimates. We want to know what the best thing to do is, and we want a numerical value. This causes us to rely on scientific studies, economic reports, and Fermi estimates. It can cause us to underweight things like the competence of a particular organization, the strength of the people involved, and other “intangibles” (which are often not actually intangible, but simply difficult to assign a number to). It can also cause us to over-focus on money as the unit of altruism, when often “it isn’t about the money”: it’s about doing the groundwork that no one else is doing, or finding the opportunity that no one has found yet.

Quantitative estimates also tend to ignore flow-through effects: effects that are an indirect, rather than direct, result of an action (such as decreased disease in the third world contributing, in the long run, to increased global security). These effects are difficult to quantify, but human and cultural intuition can do a reasonable job of taking them into account. As such, I often worry that effective altruists may actually be less effective than “normal” altruists. (One can point to all sorts of farcical charities to argue that regular altruism sucks, but this misses the point that there are also amazing organizations out there, such as the Simons Foundation or HHMI, which do enormous amounts of good despite not subscribing to the EA philosophy.)
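To make the worry concrete, here is a deliberately crude sketch (every number is invented) of how a simple comparison can flip once an indirect, flow-through term is included; the point is only that the term we can’t measure well can end up dominating the ranking:

```python
# Crude Fermi-style sketch with invented numbers: an estimate that counts
# only direct effects can rank two options differently than one that also
# credits indirect, flow-through effects.
def total_impact(direct_per_dollar, flow_through_multiplier):
    """Direct impact plus a (highly uncertain) indirect contribution."""
    return direct_per_dollar * (1 + flow_through_multiplier)

# Option A: strong measured direct effects, little presumed spillover.
# Option B: weaker measured direct effects, large presumed spillover.
a_direct, a_flow = 1.0, 0.1
b_direct, b_flow = 0.4, 3.0

print("Direct only:       A =", a_direct, " B =", b_direct)
print("With flow-through: A =", total_impact(a_direct, a_flow),
      " B =", total_impact(b_direct, b_flow))

# A direct-only analysis favors A; adding the flow-through term flips the
# ranking -- and that multiplier is exactly the number we can't pin down.
```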

What’s particularly worrisome is that even if we were less effective than normal altruists, we would probably still end up looking better by our own standards, which explicitly fail to account for the ways in which normal altruists might outperform us (see above). This is a problem with any paradigm, but the fact that the effective altruist community is small and insular, and relies heavily on its paradigm, makes us far more susceptible to it.