A critique of effective altruism

I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.

(EDIT: As per the comments of Vaniver, Carl Shulman, and others, this didn’t quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)

How to read this post

(EDIT: the following two paragraphs were written before I softened the tone of the piece. They’re less relevant to the more moderate version that I actually published.)

Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.

Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)

(End less relevant paragraphs.)

Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.

Abstract

Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement, effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.

By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.

Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into the ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other, essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.

Below I introduce various ways in which effective altruists have failed to go beyond the social-satisficing algorithm of establishing some credibly acceptable alternatives and then picking among them based on essentially random preferences. I exhibit other areas where the norms of effective altruism fail to guard against motivated cognition. Both of these phenomena add what I call “epistemic inertia” to the effective-altruist consensus: effective altruists become more subject to pressures on their beliefs other than those from a truth-seeking process, meaning that the EA consensus becomes less able to update on new evidence or arguments, which prevents the movement from moving forward. I argue that this stems from effective altruists’ reluctance to think through issues of the form “being a successful social movement” rather than “correctly applying utilitarianism individually”. This could potentially be solved by introducing an additional principle of effective altruism—e.g. “group self-awareness”—but it may be too late to add new things to effective altruism’s DNA.

Philosophical difficulties

There is currently wide disagreement among effective altruists on the correct framework for population ethics. This is crucially important for determining the best way to improve the world: different population ethics can lead to drastically different choices (or at least so we would expect a priori), and if the EA movement can’t converge on at least its instrumental goals, it will quickly fragment and lose its power. Yet there has been little progress towards discovering the correct population ethics (or, from a moral anti-realist standpoint, constructing arguments that will lead to convergence on a particular population ethics), or even determining which ethics lead to which interventions being better.

Poor cause choices

Many effective altruists donate to GiveWell’s top charities. All three of these charities work in global health. Is that because GiveWell knows that global health is the highest-leverage cause? No. It’s because global health was the only cause with enough data to say anything very useful about. There’s little reason to suppose that this correlates with being particularly high-leverage—on the contrary, heuristic but less rigorous arguments for causes like existential risk prevention, vegetarian advocacy, and open borders suggest that these could be even more efficient.

Furthermore, our current “best known intervention” is likely to change (in a more cost-effective direction) in the future. There are two competing effects here: we might discover better interventions to donate to than the ones we currently think are best, but we also might run out of opportunities for the current best known intervention and have to switch to the second-best. So far we seem to be in a regime where the first effect dominates, and there’s no evidence that we’ll reach a tipping point very soon, especially given how new the field of effective charity research is.

Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides. And anyway, donating when you believe it’s not (except for example-setting) the best possible course of action, in order to make a point about figuring out the best possible course of action and then doing that thing, seems perverse.

Non-obviousness

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

The “market” for ideas is at least somewhat efficient: most simple, obvious, and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.

Efficient markets for giving

It’s often claimed that “nonprofits are not a market for doing good; they’re a market for warm fuzzies”. This is used as justification for why it’s possible to do immense amounts of good by donating. However, while it’s certainly true that most donors aren’t explicitly trying to purchase utility, there’s still a lot of money that is.

The Gates Foundation is an example of such an organization. They’re effectiveness-minded and have $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses.
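To make the arithmetic behind that claim concrete, here is a minimal back-of-envelope sketch in Python. The lives-saved figure is the 80,000 Hours estimate cited above; the spend levels are purely hypothetical placeholders I’ve chosen for illustration, not actual Gates Foundation figures.

```python
# Back-of-envelope sketch of the implied cost per life saved.
# lives_saved comes from the 80,000 Hours estimate cited above;
# the spend levels below are assumed, illustrative numbers only.

lives_saved = 6_000_000  # ~6 million lives, per the cited estimate

def implied_cost_per_life(total_spend_usd: float) -> float:
    """Implied cost per life saved for a given total program spend."""
    return total_spend_usd / lives_saved

# Hypothetical spend levels (a "relatively small part" of a $60B endowment):
for spend in (2e9, 5e9, 10e9):
    print(f"${spend / 1e9:.0f}B spent -> ~${implied_cost_per_life(spend):,.0f} per life saved")

# Even at an assumed $10B of spending, that is roughly $1,700 per life saved,
# in the same ballpark as (or better than) the "current best guesses" for
# top charities that the text refers to.
```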

So why not just donate to the Gates Foundation? Effective altruists need a better account of the “market inefficiencies” that they’re exploiting that Gates isn’t. Why didn’t the Gates Foundation fund the Against Malaria Foundation, GiveWell’s top charity, when it’s in one of their main research areas? It seems implausible that the answer is simple incompetence or the like.

A general rule of markets is that if you don’t know what your edge is, you’re the sucker. Many effective altruists, when asked what their edge is, give some answer along the lines of “actually being strategic/thinking about utility/caring about results”, and stop thinking there. This isn’t a compelling case: as mentioned before, it’s not clear why no one else is doing these things.

Inconsistent attitude towards rigor

Effective altruists insist on extraordinary rigor in their charity recommendations—cf. for instance GiveWell’s work. Yet for many ancillary problems—donating now vs. later, choosing a career, and deciding how “meta” to go (between direct work, earning to give, doing advocacy, and donating to advocacy), to name a few—they seem happy to choose between the not-obviously-wrong alternatives based on intuition and gut feelings.

Poor psychological understanding

John Sturm suggests, and I agree, that many of these issues are psychological in nature:

I think a lot of these problems take root in a commitment-level issue:

I, for instance, am thrilled about changing my mentality towards charity, not my mentality towards having kids. My first guess is that—from an EA and overall ethical perspective—it would be a big mistake for me to have kids (even after taking into account the normal EA excuses about doing things for myself). At least right now, though, I just don’t care that I’m ignoring my ethics and EA; I want to have kids and that’s that.

This is a case in which I’m not “being lazy” so much as just not trying at all. But when someone asks me about it, it’s easier for me to give some EA excuse (like that having kids will make me happier and more productive) that I don’t think is true—and then I look like I’m being a lazy or careless altruist rather than not being one at all.

The model I’m building is this: there are many different areas in life where I could apply EA. In some of them, I’m wholeheartedly willing. In some of them, I’m not willing at all. Then there are two kinds of areas where it looks like I’m being a lazy EA: those where I’m willing and want to be a better EA… and those where I’m not willing but I’m just pretending (to myself or others or both).

The point of this: when we ask someone to be a less lazy EA, we are (1) helping them do a better job at something they want to do, and (2) trying to make them either do more than they want to or admit they are “bad”.

In general, most effective altruists respond to deep conflicts between effective altruism and other goals in one of the following ways:

  1. Unconsciously resolve the cognitive dissonance with motivated reasoning: “it’s clearly my comparative advantage to spread effective altruism through poetry!”

  2. Deliberately and knowingly use motivated reasoning: “dear Facebook group, what are the best utilitarian arguments in favor of becoming an EA poet?”

  3. Take the easiest “honest” way out: “I wouldn’t be psychologically able to do effective altruism if it forced me to go into finance instead of writing poetry, so I’ll become an effective altruist poet instead”.

The third is debatably defensible—though, for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work.

Furthermore, EA norms do not proscribe even the first two, leading to a group norm that doesn’t cause people to notice when they’re engaging in a certain amount of motivated cognition. This is quite toxic to the movement’s ability to converge on the truth. (As before, effective altruists are still better than the general population at this; the core EA principles are strong enough to make people notice the most obvious motivated cognition that runs afoul of them. But that’s not nearly good enough.)

Historical analogues

With the partial exception of GiveWell’s history of philanthropy project, there’s been no research into good historical outside views. Although there are no direct precursors of effective altruism (worrying in its own right; see above), there is one notably similar movement: communism, where the idea of “from each according to his ability, to each according to his needs” originated. Communism is also notable for its various abject failures. Effective altruists need to be more worried about how they will avoid failures of a similar class—and in general they need to be more aware of the pitfalls, as well as the benefits, of being an increasingly large social movement.

Aaron Tucker elaborates better than I could:

In particular, Communism/Socialism was a movement that was started by philosophers, then continued by technocrats, where they thought reason and planning could make the world much better, and that if they coordinated to take action to fix everything, they could eliminate poverty, disease, etc.

Marx totally got the “actually trying vs. pretending to try” distinction AFAICT (“Philosophers have only explained the world, but the real problem is to change it” is a quote of his), and he really strongly rails against people who unreflectively try to fix things in ways that make sense to the culture they’re starting from—the problem isn’t that the bourgeoisie aren’t trying to help people, it’s that the only conception of help that the bourgeoisie have is one that’s mostly epiphenomenal to actually improving the lives of the proletariat—giving them nice bourgeois things like education and voting rights, but not doing anything to improve the material condition of their life, or fix the problems of why they don’t have those in the first place, and don’t just make them themselves.

So if Marx got the pretend/actually-try distinction, and his followers took over countries, and they had a ton of awesome technocrats, it seems like it’s the perfect EA thing, and it totally didn’t work.

Monoculture

Effective altruists are not very diverse. The vast majority are white, “upper-middle-class”, intellectually and philosophically inclined, from a developed country, etc. (and I think it skews significantly male as well, though I’m less sure of this). And as much as the multiple-perspectives argument for diversity is hackneyed by this point, it seems quite germane, especially when considering e.g. global health interventions, whose beneficiaries are culturally very foreign to us.

Effective altruists are not very humanistically aware either. EA came out of analytic philosophy and spread from there to math and computer science. As such, they are too hasty to dismiss many arguments as moral-relativist postmodernist fluff, e.g. that effective altruists are promoting cultural imperialism by forcing a Westernized conception of “the good” onto people they’re trying to help. Even if EAs are quite confident that the utilitarian/reductionist/rationalist worldview is correct, the outside view is that really engaging with a greater diversity of opinions is very helpful.

Community problems

The discourse around effective altruism in e.g. the Facebook group used to be of fairly high quality. But as the movement grows, the traditional venues of discussion are getting inundated with new people who haven’t absorbed the norms of discussion or standards of proof yet. If this is not rectified quickly, the EA community will cease to be useful at all: there will be no venue in which a group truth-seeking process can operate. Yet nobody seems to be aware of the magnitude of this problem. There have been some half-hearted attempts to fix it, but nothing much has come of them.

Movement building issues

The whole point of having an effective altruism “movement” is that it’ll be bigger than the sum of its parts. Being organized as a movement should turn effective altruism into the kind of large, semi-monolithic actor that can actually get big stuff done, not just make marginal contributions.

But in practice, large movements and truth-seeking hardly ever go together. As movements grow, they get more “epistemic inertia”: it becomes much harder for them to update on evidence. This is because they have to rely on social methods to propagate their memes rather than truth-seeking behavior. But people who have been drawn to EA by social pressure rather than truth-seeking take much longer to change their beliefs, so once the movement reaches a critical mass of them, it will become difficult for it to update on new evidence. As described above, this is already happening to effective altruism with the ever-less-useful Facebook group.

Conclusion

I’ve presented several areas in which the effective altruism movement fails to converge on truth through a combination of the following effects:

  1. Effective altruists “stop thinking” too early and satisfice for “doesn’t obviously conflict with EA principles” rather than optimizing for “increases utility”. (For instance, they choose donations poorly due to this effect.)

  2. Effective altruism puts strong demands on its practitioners, and EA group norms do not appropriately guard against motivated cognition to avoid them. (For example, this often causes people to choose bad careers.)

  3. Effective altruists don’t notice important areas to look into, specifically issues related to “being a successful movement” rather than “correctly implementing utilitarianism”. (For instance, they ignore issues around group epistemology, historical precedents for the movement, movement diversity, etc.)

These problems are worrying on their own, but the lack of awareness of them is the real problem. The monoculture is worrying, but the lackadaisical attitude towards it is worse. The lack of rigor is unfortunate, but the fact that people haven’t noticed it is the real problem.

Either effective altruists don’t yet realize that they’re subject to the failure modes of any large movement, or they don’t feel motivated to do the boring legwork of e.g. engaging with viewpoints that their inside view says are annoying but that the outside view says are useful in expectation. Either way, this bespeaks worrying things about the movement’s staying power.

More importantly, it also indicates an epistemic failure on the part of effective altruists. The fact that no one else within EA has done a substantial critique yet is a huge red flag. If effective altruists aren’t aware of strong critiques of the EA movement, why aren’t they looking for them? This suggests that, contrary to the emphasis on rationality within the movement, many effective altruists’ beliefs are based on social, rather than truth-seeking, behavior.

If it doesn’t solve these problems, effective-altruism-the-movement won’t help me achieve any more good than I could individually. All it will do is add epistemic inertia, as it takes more effort to shift the EA consensus than to update my individual beliefs.

Are these problems solvable?

It seems to me that the third issue above (lack of self-awareness as a social movement) subsumes the other two: if effective altruism as a movement were sufficiently introspective, it could probably notice and solve the other two problems, as well as future ones that will undoubtedly crop up.

Hence, I propose an additional principle of effective altruism. In addition to being altruistic, maximizing, egalitarian, and consequentialist, we should be self-aware: we should think carefully about the issues associated with being a successful movement, in order to make sure that we can move beyond the obvious applications of EA principles and come up with non-trivially better ways to improve the world.

Acknowledgments

Thanks to Nick Bostrom for coining the idea of a hypothetical apostasy, and to Will Eden for mentioning it recently.

Thanks to Michael Vassar, Aaron Tucker and Andrew Rettek for inspiring various of these points.

Thanks to Aaron Tucker and John Sturm for reading an advance draft of this post and giving valuable feedback.

Cross-posted from http://www.benkuhn.net/ea-critique since I want outside perspectives, and also LW’s comments are nicer than mine.