Ef­fect­ive al­tru­ism is self-recommending

A par­ent I know re­ports (some de­tails an­onym­ized):

Re­cently we bought my 3-year-old daugh­ter a “be­ha­vior chart,” in which she can earn stick­ers for achieve­ments like not throw­ing tan­trums, eat­ing fruits and ve­get­ables, and go­ing to sleep on time. We suc­cess­fully im­pressed on her that a ma­jor goal each day was to earn as many stick­ers as pos­sible.

This morn­ing, though, I found her just plas­ter­ing her en­tire be­ha­vior chart with stick­ers. She genu­inely seemed to think I’d be proud of how many stick­ers she now had.

The Ef­fect­ive Al­tru­ism move­ment has now entered this ex­tremely cute stage of cog­nit­ive de­vel­op­ment. EA is more than three years old, but in­sti­tu­tions age dif­fer­ently than in­di­vidu­als.

What is a con­fid­ence game?

In 2009, in­vest­ment man­ager and con artist Bernie Madoff pled guilty to run­ning a massive fraud, with $50 bil­lion in fake re­turn on in­vest­ment, hav­ing out­right em­bezzled around $18 bil­lion out of the $36 bil­lion in­vestors put into the fund. Only a couple of years earlier, when my grand­father was still alive, I re­mem­ber him telling me about how Madoff was a genius, get­ting his in­vestors a con­sist­ent high re­turn, and about how he wished he could be in on it, but Madoff wasn’t ac­cept­ing ad­di­tional in­vestors.

What Madoff was run­ning was a clas­sic Ponzi scheme. In­vestors gave him money, and he told them that he’d got­ten them an ex­cep­tion­ally high re­turn on in­vest­ment, when in fact he had not. But be­cause he prom­ised to be able to do it again, his in­vestors mostly re­in­ves­ted their money, and more people were ex­cited about get­ting in on the deal. There was more than enough money to cover the few people who wanted to take money out of this amaz­ing op­por­tun­ity.

Ponzi schemes, pyramid schemes, and speculative bubbles are all situations in which investors’ expected profits are paid out of the money paid in by new investors, rather than out of any independently profitable venture. Ponzi schemes are centrally managed – the person running the scheme represents it to investors as legitimate, and takes responsibility for finding new investors and paying off old ones. In pyramid schemes such as multi-level marketing and chain letters, each generation of investors recruits new investors and profits from them. In speculative bubbles, there is no formal structure propping up the scheme, only a common, mutually reinforcing set of expectations among speculators driving up the price of something that was already for sale.
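To make the shared structure concrete, here is a toy ledger sketch (entirely my own illustration; the function, rates, and figures are hypothetical and not drawn from any real scheme). The point it shows: the balances investors are told about compound at the promised rate, while the real pool of cash changes only with net deposits, so the scheme can keep paying out only as long as withdrawals stay small relative to new money coming in.

```python
# Toy Ponzi ledger: reported balances vs. the real cash pool.
# All numbers are made up for illustration.

def run_ponzi(promised_rate, annual_inflow=100.0, withdrawal_frac=0.05,
              years=10):
    """One period per year. Investors are told their money compounds at
    `promised_rate`; in reality there is no venture, only a cash pool fed
    by new deposits and drained by the few investors who cash out."""
    reported = 0.0  # what investors believe they own
    pool = 0.0      # cash actually on hand
    for year in range(1, years + 1):
        reported = reported * (1 + promised_rate) + annual_inflow
        withdrawals = withdrawal_frac * reported
        pool += annual_inflow - withdrawals
        if pool < 0:
            return f"collapses in year {year} (reported {reported:,.0f}, pool {pool:,.0f})"
    return f"survives {years} years (reported {reported:,.0f}, pool {pool:,.0f})"

# A modest promise with steady inflows: the gap between what investors
# think they own and what actually exists just keeps quietly growing.
print(run_ponzi(promised_rate=0.10))
# An extravagant promise outruns any plausible stream of new money fast.
print(run_ponzi(promised_rate=1.00))
```

Either way, the reported total is a number on a chart; only the flow of new confidence keeps it redeemable.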

The gen­eral situ­ation in which someone sets them­self up as the re­pos­it­ory of oth­ers’ con­fid­ence, and uses this as lever­age to ac­quire in­creas­ing in­vest­ment, can be called a con­fid­ence game.

Some of the most iconic Ponzi schemes blew up quickly be­cause they prom­ised wildly un­real­istic growth rates. This had three un­desir­able ef­fects for the people run­ning the schemes. First, it at­trac­ted too much at­ten­tion – too many people wanted into the scheme too quickly, so they rap­idly ex­hausted sources of new cap­ital. Se­cond, be­cause their rates of re­turn were im­plaus­ibly high, they made them­selves tar­gets for scru­tiny. Third, the ex­tremely high rates of re­turn them­selves caused their prom­ises to quickly out­pace what they could plaus­ibly re­turn to even a small share of their in­vestor vic­tims.

Madoff was careful to avoid all these problems, which is why his scheme lasted for decades. He promised returns (around 10% annually) that were plausibly high for a successful hedge fund, especially one illegally engaged in insider trading, rather than the sort of implausibly high returns typical of more blatant Ponzi schemes. (Charles Ponzi promised to double investors’ money in 90 days.) Madoff showed reluctance to accept new clients, like any other fund manager who doesn’t want to get too big for their trading strategy.

He didn’t plaster stick­ers all over his be­ha­vior chart – he put a reas­on­able num­ber of stick­ers on it. He played a long game.
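For a sense of scale (my own arithmetic, using only the two figures quoted above), here is how differently those two promises compound:

```python
# Compounding speed of the two promises mentioned above (illustration only).
madoff_promise = 1.10               # ~10% per year
ponzi_promise = 2.0 ** (365 / 90)   # "double your money in 90 days", annualized

for years in (1, 5):
    print(f"after {years} year(s): "
          f"Madoff-style promise x{madoff_promise ** years:.2f}, "
          f"Ponzi-style promise x{ponzi_promise ** years:,.0f}")
```

A promise that multiplies obligations roughly a million-fold over five years has to be paid off or exposed almost immediately; a 10% promise can be rolled over for a very long time.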

Not all confidence games are inherently bad. For instance, the US national pension system, Social Security, operates as a kind of Ponzi scheme, yet it is not obviously unsustainable, and many people continue to be glad that it exists. Nominally, when people pay Social Security taxes, the money is invested in the Social Security trust fund, which holds interest-bearing financial assets that will be used to pay out benefits in their old age. In this respect it looks like an ordinary pension fund.

However, the fin­an­cial as­sets are US Treas­ury bonds. There is no in­de­pend­ently prof­it­able ven­ture. The Federal Govern­ment of the Un­ited States of Amer­ica is quite lit­er­ally writ­ing an IOU to it­self, and then spend­ing the money on cur­rent ex­pendit­ures, in­clud­ing pay­ing out cur­rent So­cial Se­cur­ity be­ne­fits.

The Federal Govern­ment, of course, can write as large an IOU to it­self as it wants. It could make all tax rev­en­ues part of the So­cial Se­cur­ity pro­gram. It could is­sue new Treas­ury bonds and gift them to So­cial Se­cur­ity. None of this would in­crease its abil­ity to pay out So­cial Se­cur­ity be­ne­fits. It would be an empty ex­er­cise in put­ting stick­ers on its own chart.

If the Federal gov­ern­ment loses the abil­ity to col­lect enough taxes to pay out so­cial se­cur­ity be­ne­fits, there is no ad­di­tional ca­pa­city to pay rep­res­en­ted by US Treas­ury bonds. What we have is an im­plied prom­ise to pay out fu­ture be­ne­fits, backed by the ex­pect­a­tion that the gov­ern­ment will be able to col­lect taxes in the fu­ture, in­clud­ing So­cial Se­cur­ity taxes.

There’s nothing necessarily wrong with this, except that the mechanism by which Social Security is funded is obscured by financial engineering. However, this misdirection should raise at least some doubts as to the underlying sustainability or desirability of the commitment. In fact, this scheme was adopted specifically to give people the impression that they had some sort of property rights over their Social Security pension, in order to make the program politically difficult to eliminate. Once people have “bought in” to a program, they will be reluctant to treat their prior contributions as sunk costs, and willing to invest additional resources to salvage their investment, in ways that may make them increasingly reliant on it.

Not all con­fid­ence games are in­trins­ic­ally bad, but du­bi­ous pro­grams be­ne­fit the most from be­ing set up as con­fid­ence games. More gen­er­ally, bad pro­grams are the ones that be­ne­fit the most from be­ing al­lowed to fiddle with their own ac­count­ing. As Daniel Davies writes, in The D-Squared Di­gest One Minute MBA—Avoid­ing Pro­jects Pur­sued By Morons 101:

Good ideas do not need lots of lies told about them in order to gain public acceptance. I was first made aware of this during an accounting class. We were discussing the subject of accounting for stock options at technology companies. […] One side (mainly technology companies and their lobbyists) held that stock option grants should not be treated as an expense on public policy grounds; treating them as an expense would discourage companies from granting them, and stock options were a vital compensation tool that incentivised performance, rewarded dynamism and innovation and created vast amounts of value for America and the world. The other side (mainly people like Warren Buffet) held that stock options looked awfully like a massive blag carried out by management at the expense of shareholders, and that the proper place to record such blags was the P&L account.

Our lec­turer, in sum­ming up the de­bate, made the not un­reas­on­able point that if stock op­tions really were a fant­astic tool which un­leashed the cre­at­ive power in every em­ployee, every­one would want to ex­pense as many of them as pos­sible, the bet­ter to boast about how in­nov­at­ive, em­powered and fant­astic they were. Since the tech com­pan­ies’ point of view ap­peared to be that if they were ever forced to ac­count hon­estly for their op­tion grants, they would quickly stop mak­ing them, this offered de­cent prima facie evid­ence that they weren’t, really, all that fant­astic.

However, I want to gen­er­al­ize the concept of con­fid­ence games from the do­main of fin­an­cial cur­rency, to the do­main of so­cial credit more gen­er­ally (of which money is a par­tic­u­lar form that our so­ci­ety com­monly uses), and in par­tic­u­lar I want to talk about con­fid­ence games in the cur­rency of credit for achieve­ment.

If I were ap­ply­ing for a very im­port­ant job with great re­spons­ib­il­it­ies, such as Pres­id­ent of the Un­ited States, CEO of a top cor­por­a­tion, or head or board mem­ber of a ma­jor AI re­search in­sti­tu­tion, I could be ex­pec­ted to have some rel­ev­ant prior ex­per­i­ence. For in­stance, I might have had some suc­cess man­aging a sim­ilar, smal­ler in­sti­tu­tion, or serving the same in­sti­tu­tion in a lesser ca­pa­city. More gen­er­ally, when I make a bid for con­trol over some­thing, I am im­pli­citly claim­ing that I have enough so­cial credit – enough of a track re­cord – that I can be ex­pec­ted to do good things with that con­trol.

In gen­eral, if someone has done a lot, we should ex­pect to see an ice­berg pat­tern where a small eas­ily-vis­ible part sug­gests a lot of solid but harder-to-verify sub­stance un­der the sur­face. One might be temp­ted to make a habit of im­put­ing a much lar­ger ice­berg from the com­bin­a­tion of a small floaty bit, and prom­ises. But, a small eas­ily-vis­ible part with claims of a lot of harder-to-see sub­stance is easy to mimic without ac­tu­ally do­ing the work. As Davies con­tin­ues:

The Vital Im­port­ance of Audit. Em­phas­ised over and over again. Brealey and My­ers has a sec­tion on this, in which they re­mind cal­low stu­dents that like back­ing-up one’s com­puter files, this is a les­son that every­one seems to have to learn the hard way. Basic­ally, it’s been shown time and again and again; com­pan­ies which do not audit com­pleted pro­jects in or­der to see how ac­cur­ate the ori­ginal pro­jec­tions were, tend to get ex­actly the fore­casts and pro­jects that they de­serve. Com­pan­ies which have a cul­ture where there are no con­sequences for mak­ing dis­hon­est fore­casts, get the pro­jects they de­serve. Com­pan­ies which al­loc­ate blank cheques to man­age­ment teams with a proven re­cord of fail­ure and men­dacity, get what they de­serve.

If you can in­de­pend­ently put stick­ers on your own chart, then your chart is no longer re­li­ably track­ing some­thing ex­tern­ally veri­fied. If fore­casts are not checked and tracked, or fore­casters are not con­sequently held ac­count­able for their fore­casts, then there is no reason to be­lieve that as­sess­ments of fu­ture, on­go­ing, or past pro­grams are ac­cur­ate. Ad­opt­ing a wait-and-see at­ti­tude, in­sist­ing on audits for ac­tual res­ults (not just pre­dic­tions) be­fore in­vest­ing more, will def­in­itely slow down fund­ing for good pro­grams. But without it, most of your fund­ing will go to worth­less ones.
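The last point can be illustrated with a toy simulation (entirely my own construction, with made-up numbers and no connection to any real funder): if grants are ranked by unaudited self-forecasts and some applicants simply over-claim, the over-claimers crowd out genuinely good projects, while auditing past forecasts and discounting proven over-claimers recovers most of the value.

```python
import random

random.seed(0)

def make_projects(n=200, inflator_share=0.3):
    """Each project has a true value in [0, 1] and a self-reported forecast.
    Honest teams forecast their value with some noise; "inflators" report a
    number untethered from reality."""
    projects = []
    for _ in range(n):
        true_value = random.uniform(0, 1)
        inflator = random.random() < inflator_share
        if inflator:
            forecast = random.uniform(2.0, 3.0)              # pure salesmanship
        else:
            forecast = true_value * random.uniform(0.8, 1.2)
        projects.append({"value": true_value, "forecast": forecast,
                         "inflator": inflator})
    return projects

def fund_top(projects, score, k=20):
    """Fund the k highest-scoring projects; return the true value delivered."""
    chosen = sorted(projects, key=score, reverse=True)[:k]
    return sum(p["value"] for p in chosen)

projects = make_projects()

# No audit: rank purely by self-reported forecasts.
no_audit = fund_top(projects, score=lambda p: p["forecast"])

# With audit: past over-claimers have been identified, and their forecasts
# no longer earn them funding.
with_audit = fund_top(projects,
                      score=lambda p: 0.0 if p["inflator"] else p["forecast"])

print(f"true value delivered without audits: {no_audit:.1f}")
print(f"true value delivered with audits:    {with_audit:.1f}")
```

Under these made-up assumptions, the unaudited funder hands most of its slots to whoever claimed the most, which is exactly the failure mode Davies describes.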

Open Phil­an­thropy, OpenAI, and closed val­id­a­tion loops

The Open Phil­an­thropy Pro­ject re­cently an­nounced a $30 mil­lion grant to the $1 bil­lion non­profit AI re­search or­gan­iz­a­tion OpenAI. This is the largest single grant it has ever made. The main point of the grant is to buy in­flu­ence over OpenAI’s fu­ture pri­or­it­ies; Holden Karnof­sky, Ex­ec­ut­ive Dir­ector of the Open Phil­an­thropy Pro­ject, is get­ting a seat on OpenAI’s board as part of the deal. This marks the second ma­jor shift in fo­cus for the Open Phil­an­thropy Pro­ject.

The first shift (back when it was just called GiveWell) was from try­ing to find the best already-ex­ist­ing pro­grams to fund (“pass­ive fund­ing”) to en­vi­sion­ing new pro­grams and work­ing with grantees to make them real­ity (“act­ive fund­ing”). The new shift is from fund­ing spe­cific pro­grams at all, to try­ing to take con­trol of pro­grams without any spe­cific plan.

To justify the passive funding stage, all you have to believe is that you can choose better among existing charities than other donors can. For active funding, you have to believe that you’re smart enough to evaluate potential programs, just as a charity founder might, and pick ones that will outperform. But buying control implies that you think your judgment is so much better that, even before you’ve evaluated the specific program in question, you ought to have a say whenever someone is doing something big.

When GiveWell moved from a pass­ive to an act­ive fund­ing strategy, it was re­ly­ing on the moral credit it had earned for its ex­tens­ive and well-re­garded char­ity eval­u­ations. The thing that was par­tic­u­larly ex­cit­ing about GiveWell was that they fo­cused on out­comes and ef­fi­ciency. They didn’t just fo­cus on the size or in­tens­ity of the prob­lem a char­ity was ad­dress­ing. They didn’t just look at fin­an­cial de­tails like over­head ra­tios. They asked the ques­tion a con­sequen­tial­ist cares about: for a given ex­pendit­ure of money, how much will this char­ity be able to im­prove out­comes?

However, when GiveWell tracks its im­pact, it does not track ob­ject­ive out­comes at all. It tracks in­puts: at­ten­tion re­ceived (in the form of vis­its to its web­site) and money moved on the basis of its re­com­mend­a­tions. In other words, its es­tim­ate of its own im­pact is based on the level of trust people have placed in it.

So, as GiveWell built out the Open Phil­an­thropy Pro­ject, its story was: We prom­ised to do some­thing great. As a res­ult, we were en­trus­ted with a fair amount of at­ten­tion and money. There­fore, we should be given more re­spons­ib­il­ity. We rep­res­en­ted our be­ha­vior as praise­worthy, and as a res­ult people put stick­ers on our chart. For this reason, we should be ad­vanced stick­ers against fu­ture days of praise­worthy be­ha­vior.

Then, as the Open Phil­an­thropy Pro­ject ex­plored act­ive fund­ing in more areas, its es­tim­ate of its own ef­fect­ive­ness grew. After all, it was fund­ing more spec­u­lat­ive, hard-to-meas­ure pro­grams, but a multi-bil­lion-dol­lar donor, which was largely re­ly­ing on the Open Phil­an­thropy Pro­ject’s opin­ions to as­sess ef­fic­acy (in­clud­ing its own ef­fic­acy), con­tin­ued to trust it.

What is miss­ing here is any ob­ject­ive track re­cord of be­ne­fits. What this looks like to me, is a long sort of con­fid­ence game – or, us­ing less mor­ally loaded lan­guage, a ven­ture with struc­tural re­li­ance on in­creas­ing amounts of lever­age – in the cur­rency of moral credit.

Ver­sion 0: GiveWell and pass­ive funding

First, there was GiveWell. GiveWell’s pur­pose was to find and vet evid­ence-backed char­it­ies. However, it re­cog­nized that char­it­ies know their own busi­ness best. It wasn’t try­ing to do bet­ter than the char­it­ies; it was try­ing to do bet­ter than the typ­ical char­ity donor, by be­ing more dis­cern­ing.

GiveWell’s think­ing from this phase is ex­em­pli­fied by co-founder Elie Hassen­feld’s Six tips for giv­ing like a pro:

When you give, give cash – no strings at­tached. You’re just a part-time donor, but the char­ity you’re sup­port­ing does this full-time and staff there prob­ably know a lot more about how to do their job than you do. If you’ve found a char­ity that you feel is ex­cel­lent – not just ac­cept­able – then it makes sense to trust the char­ity to make good de­cisions about how to spend your money.

GiveWell sim­il­arly tried to avoid dis­tort­ing char­it­ies’ be­ha­vior. Its job was only to eval­u­ate, not to in­ter­fere. To per­ceive, not to act. To find the best, and buy more of the same.

How did GiveWell assess its effectiveness in this stage? When GiveWell evaluates charities, it estimates their cost-effectiveness in advance. It assesses the program the charity is running, through experimental evidence in the form of randomized controlled trials. GiveWell also audits the charity to make sure it is actually running the program, and to figure out how much the program costs as implemented. This is an excellent, evidence-based way to generate a prediction of how much good will be done by moving money to the charity.

As far as I can tell, these pre­dic­tions are un­tested.

One of GiveWell’s early top charities was VillageReach, which worked to improve vaccine delivery logistics in Mozambique. GiveWell estimated that VillageReach could save a life for $1,000. But this charity is no longer recommended. The public page says:

VillageReach (www.villagereach.org) was our top-rated organization for 2009, 2010 and much of 2011 and it has received over $2 million due to GiveWell’s recommendation. In late 2011, we removed VillageReach from our top-rated list because we felt its project had limited room for more funding. As of November 2012, we believe that this project may have room for more funding, but we still prefer our current highest-rated charities above it.

GiveWell reana­lyzed the data it based its re­com­mend­a­tions on, but hasn’t pub­lished an after-the-fact ret­ro­spect­ive of long-run res­ults. I asked GiveWell about this by email. The re­sponse was that such an as­sess­ment was not pri­or­it­ized be­cause GiveWell had found im­ple­ment­a­tion prob­lems in Vil­lageReach’s scale-up work as well as reas­ons to doubt its ori­ginal con­clu­sion about the im­pact of the pi­lot pro­gram. It’s un­clear to me whether this has caused GiveWell to eval­u­ate char­it­ies dif­fer­ently in the fu­ture.

I don’t think someone look­ing at GiveWell’s page on Vil­lageReach would be likely to reach the con­clu­sion that GiveWell now be­lieves its ori­ginal re­com­mend­a­tion was likely er­ro­neous. GiveWell’s im­pact page con­tin­ues to count money moved to Vil­lageReach without any men­tion of the re­trac­ted re­com­mend­a­tion. If we as­sume that the point of track­ing money moved is to track the be­ne­fit of mov­ing money from worse to bet­ter uses, then re­pu­di­ated pro­grams ought to be coun­ted against the total, as costs, rather than to­wards it.
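As a worked example of the accounting change proposed here (only the $2 million VillageReach figure comes from the quoted GiveWell page; the other charities and amounts are hypothetical):

```python
# Hypothetical "money moved" ledger. Only the $2M VillageReach figure comes
# from the quoted GiveWell page; the other entries are invented.
recommendations = [
    {"charity": "VillageReach",   "moved_millions": 2.0, "retracted": True},
    {"charity": "Hypothetical A", "moved_millions": 5.0, "retracted": False},
    {"charity": "Hypothetical B", "moved_millions": 3.0, "retracted": False},
]

# Current-style tally: every dollar moved counts toward impact.
gross = sum(r["moved_millions"] for r in recommendations)

# Proposed tally: money moved on later-repudiated recommendations was steered
# away from the donor's next-best option, so it counts against the total.
net = sum(-r["moved_millions"] if r["retracted"] else r["moved_millions"]
          for r in recommendations)

print(f"gross money moved: ${gross:.1f}M")                 # $10.0M
print(f"net, counting retractions as costs: ${net:.1f}M")  # $6.0M
```

The point is not the particular numbers, which are invented, but that the two tallies can diverge arbitrarily far if retracted recommendations are never subtracted.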

GiveWell has re­com­men­ded the Against Malaria Found­a­tion for the last sev­eral years as a top char­ity. AMF dis­trib­utes long-last­ing in­sect­icide-treated bed nets to pre­vent mos­qui­tos from trans­mit­ting mal­aria to hu­mans. Its eval­u­ation of AMF does not men­tion any dir­ect evid­ence, pos­it­ive or neg­at­ive, about what happened to mal­aria rates in the areas where AMF op­er­ated. (There is a dis­cus­sion of the evid­ence that the bed nets were in fact de­livered and used.) In the sup­ple­ment­ary in­form­a­tion page, how­ever, we are told:

Pre­vi­ously, AMF ex­pec­ted to col­lect data on mal­aria case rates from the re­gions in which it fun­ded LLIN dis­tri­bu­tions: […] In 2016, AMF shared mal­aria case rate data […] but we have not pri­or­it­ized ana­lyz­ing it closely. AMF be­lieves that this data is not high qual­ity enough to re­li­ably in­dic­ate ac­tual trends in mal­aria case rates, so we do not be­lieve that the fact that AMF col­lects mal­aria case rate data is a con­sid­er­a­tion in AMF’s fa­vor, and do not plan to con­tinue to track AMF’s pro­gress in col­lect­ing mal­aria case rate data.

The data was noisy, so they simply stopped check­ing whether AMF’s bed net dis­tri­bu­tions do any­thing about mal­aria.

If we want to know the size of the im­prove­ment made by GiveWell in the de­vel­op­ing world, we have their pre­dic­tions about cost-ef­fect­ive­ness, an audit trail veri­fy­ing that work was per­formed, and their dir­ect meas­ure­ment of how much money people gave be­cause they trus­ted GiveWell. The pre­dic­tions on the fi­nal tar­get – im­proved out­comes – have not been tested.

GiveWell is ac­tu­ally do­ing un­usu­ally well as far as ma­jor fun­ders go. It sticks to de­scrib­ing things it’s ac­tu­ally re­spons­ible for. By con­trast, the Gates Found­a­tion, in a re­port to War­ren Buf­fet claim­ing to de­scribe its im­pact, simply de­scribed over­all im­prove­ment in the de­vel­op­ing world, a very small rhet­or­ical step from claim­ing credit for 100% of the im­prove­ment. GiveWell at least sticks to facts about GiveWell’s own ef­fects, and this is to its credit. But, it fo­cuses on costs it has been able to im­pose, not be­ne­fits it has been able to cre­ate.

The Centre for Ef­fect­ive Al­tru­ism’s Wil­liam MacAskill made a re­lated point back in 2012, though he talked about the lack of any sort of formal out­side val­id­a­tion or audit, rather than fo­cus­ing on em­pir­ical val­id­a­tion of out­comes:

As far as I know, GiveWell haven’t com­mis­sioned a thor­ough ex­ternal eval­u­ation of their re­com­mend­a­tions. […] This sur­prises me. Whereas busi­nesses have a nat­ural feed­back mech­an­ism, namely profit or loss, re­search of­ten doesn’t, hence the need for peer-re­view within aca­demia. This con­cern, when it comes to char­ity-eval­u­ation, is even greater. If GiveWell’s ana­lysis and re­com­mend­a­tions had ma­jor flaws, or were sys­tem­at­ic­ally biased in some way, it would be chal­len­ging for out­siders to work this out without a thor­ough in­de­pend­ent eval­u­ation. For­tunately, GiveWell has the re­sources to, for ex­ample, em­ploy two top de­vel­op­ment eco­nom­ists to each do an in­de­pend­ent re­view of their re­com­mend­a­tions and the sup­port­ing re­search. This would make their re­com­mend­a­tions more ro­bust at a reas­on­able cost.

GiveWell’s page on self-eval­u­ation says that it dis­con­tin­ued ex­ternal re­views in August 2013. This page links to an ex­plan­a­tion of the de­cision, which con­cludes:

We con­tinue to be­lieve that it is im­port­ant to en­sure that our work is sub­jec­ted to in-depth scru­tiny. However, at this time, the scru­tiny we’re nat­ur­ally re­ceiv­ing – com­bined with the high costs and lim­ited ca­pa­city for formal ex­ternal eval­u­ation – make us in­clined to post­pone ma­jor ef­fort on ex­ternal eval­u­ation for the time be­ing.

That said,

  • If someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.

  • We do in­tend even­tu­ally to re-in­sti­tute formal ex­ternal eval­u­ation.

Four years later, as­sess­ing the cred­ib­il­ity of this as­sur­ance is left as an ex­er­cise for the reader.

Ver­sion 1: GiveWell Labs and act­ive funding

Then there was GiveWell Labs, later called the Open Phil­an­thropy Pro­ject. It looked into more po­ten­tial phil­an­thropic causes, where the evid­ence base might not be as cut-and-dried as that for the GiveWell top char­it­ies. One thing they learned was that in many areas, there simply weren’t shovel-ready pro­grams ready for fund­ing – a fun­der has to play a more act­ive role. This shift was de­scribed by GiveWell co-founder Holden Karnof­sky in his 2013 blog post, Chal­lenges of pass­ive fund­ing:

By “pass­ive fund­ing,” I mean a dy­namic in which the fun­der’s role is to re­view oth­ers’ pro­pos­als/​ideas/​ar­gu­ments and pick which to fund, and by “act­ive fund­ing,” I mean a dy­namic in which the fun­der’s role is to par­ti­cip­ate in – or lead – the de­vel­op­ment of a strategy, and find part­ners to “im­ple­ment” it. Act­ive fun­ders, in other words, are par­ti­cip­at­ing at some level in “man­age­ment” of part­ner or­gan­iz­a­tions, whereas pass­ive fun­ders are merely choos­ing between plans that other non­profits have already come up with.

My in­stinct is gen­er­ally to try the most “pass­ive” ap­proach that’s feas­ible. Broadly speak­ing, it seems that a good part­ner or­gan­iz­a­tion will gen­er­ally know their field and en­vir­on­ment bet­ter than we do and there­fore be best po­si­tioned to design strategy; in ad­di­tion, I’d ex­pect a pro­ject to go bet­ter when its im­ple­menter has fully bought into the plan as op­posed to car­ry­ing out what the fun­der wants. However, (a) this philo­sophy seems to con­trast heav­ily with how most ex­ist­ing ma­jor fun­ders op­er­ate; (b) I’ve seen mul­tiple reas­ons to be­lieve the “act­ive” ap­proach may have more re­l­at­ive mer­its than we had ori­gin­ally an­ti­cip­ated. […]

  • In the nonprofit world of today, it seems to us that funder interests are major drivers of which ideas get proposed and fleshed out, and therefore, as a funder, it’s important to express interests rather than trying to be fully “passive.”

  • While we still wish to err on the side of be­ing as “pass­ive” as pos­sible, we are re­cog­niz­ing the im­port­ance of clearly ar­tic­u­lat­ing our val­ues/​strategy, and also re­cog­niz­ing that an area can be un­der­fun­ded even if we can’t eas­ily find shovel-ready fund­ing op­por­tun­it­ies in it.

GiveWell earned some cred­ib­il­ity from its novel, evid­ence-based out­come-ori­ented ap­proach to char­ity eval­u­ation. But this cred­ib­il­ity was already – and still is – a sort of loan. We have GiveWell’s pre­dic­tions or prom­ises of cost ef­fect­ive­ness in terms of out­comes, and we have fig­ures for money moved, from which we can in­fer how much we were prom­ised in im­proved out­comes. As far as I know, no one’s gone back and checked whether those prom­ises turned out to be true.

In the mean­time, GiveWell then lever­aged this cred­ib­il­ity by ex­tend­ing its meth­ods into more spec­u­lat­ive do­mains, where less was check­able, and donors had to put more trust in the sub­ject­ive judg­ment of GiveWell ana­lysts. This was called GiveWell Labs. At the time, this sort of com­poun­ded lever­age may have been sens­ible, but it’s im­port­ant to track whether a debt has been paid off or merely rolled over.

Ver­sion 2: The Open Phil­an­thropy Pro­ject and con­trol-seeking

Finally, the Open Philanthropy Project made its largest-ever single grant to purchase its founder a seat on a major organization’s board. This represents a transition from mere active funding to overtly purchasing influence:

The Open Phil­an­thropy Pro­ject awar­ded a grant of $30 mil­lion ($10 mil­lion per year for 3 years) in gen­eral sup­port to OpenAI. This grant ini­ti­ates a part­ner­ship between the Open Phil­an­thropy Pro­ject and OpenAI, in which Holden Karnof­sky (Open Phil­an­thropy’s Ex­ec­ut­ive Dir­ector, “Holden” through­out this page) will join OpenAI’s Board of Dir­ect­ors and, jointly with one other Board mem­ber, over­see OpenAI’s safety and gov­ernance work.

We ex­pect the primary be­ne­fits of this grant to stem from our part­ner­ship with OpenAI, rather than simply from con­trib­ut­ing fund­ing to­ward OpenAI’s work. While we would also ex­pect gen­eral sup­port for OpenAI to be likely be­ne­fi­cial on its own, the case for this grant hinges on the be­ne­fits we an­ti­cip­ate from our part­ner­ship, par­tic­u­larly the op­por­tun­ity to help play a role in OpenAI’s ap­proach to safety and gov­ernance is­sues.

Clearly the value pro­pos­i­tion is not in­creas­ing avail­able funds for OpenAI, if OpenAI’s founders’ bil­lion-dol­lar com­mit­ment to it is real:

Sam, Greg, Elon, Reid Hoff­man, Jes­sica Liv­ing­ston, Peter Thiel, Amazon Web Ser­vices (AWS), In­fosys, and YC Re­search are donat­ing to sup­port OpenAI. In total, these fun­ders have com­mit­ted $1 bil­lion, al­though we ex­pect to only spend a tiny frac­tion of this in the next few years.

The Open Phil­an­thropy Pro­ject is neither us­ing this money to fund pro­grams that have a track re­cord of work­ing, nor to fund a spe­cific pro­gram that it has prior reason to ex­pect will do good. Rather, it is buy­ing con­trol, in the hope that Holden will be able to per­suade OpenAI not to des­troy the world, be­cause he knows bet­ter than OpenAI’s founders.

How does the Open Phil­an­thropy Pro­ject know that Holden knows bet­ter? Well, it’s done some act­ive fund­ing of pro­grams it ex­pects to work out. It ex­pects those pro­grams to work out be­cause they were ap­proved by a pro­cess sim­ilar to the one used by GiveWell to find char­it­ies that it ex­pects to save lives.

If you want to ac­quire con­trol over some­thing, that im­plies that you think you can man­age it more sens­ibly than who­ever is in con­trol already. Thus, buy­ing con­trol is a claim to have su­per­ior judg­ment—not just over oth­ers fund­ing things (the ori­ginal GiveWell pitch), but over those be­ing fun­ded.

In a foot­note to the very post an­noun­cing the grant, the Open Phil­an­thropy Pro­ject notes that it has his­tor­ic­ally tried to avoid ac­quir­ing lever­age over or­gan­iz­a­tions it sup­ports, pre­cisely be­cause it’s not sure it knows bet­ter:

For now, we note that provid­ing a high pro­por­tion of an or­gan­iz­a­tion’s fund­ing may cause it to be de­pend­ent on us and ac­count­able primar­ily to us. This may mean that we come to be seen as more re­spons­ible for its ac­tions than we want to be; it can also mean we have to choose between provid­ing bad and pos­sibly dis­tort­ive guid­ance/​feed­back (un­bal­anced by other stake­hold­ers’ guid­ance/​feed­back) and leav­ing the or­gan­iz­a­tion with es­sen­tially no ac­count­ab­il­ity.

This seems to de­scribe two main prob­lems in­tro­duced by be­com­ing a dom­in­ant fun­der:

  1. People might ac­cur­ately at­trib­ute causal re­spons­ib­il­ity for some of the or­gan­iz­a­tion’s con­duct to the Open Phil­an­thropy Pro­ject.

  2. The Open Phil­an­thropy Pro­ject might in­flu­ence the or­gan­iz­a­tion to be­have dif­fer­ently than it oth­er­wise would.

The first seems ob­vi­ously silly. I’ve been try­ing to cor­rect the im­bal­ance where Open Phil is cri­ti­cized mainly when it makes grants, by cri­ti­ciz­ing it for hold­ing onto too much money.

The second really is a cost as well as a be­ne­fit, and the Open Phil­an­thropy Pro­ject has been ab­so­lutely cor­rect to re­cog­nize this. This is the sort of thing GiveWell has con­sist­ently got­ten right since the be­gin­ning and it de­serves credit for mak­ing this prin­ciple clear and – un­til now – liv­ing up to it.

But dis­com­fort with be­ing dom­in­ant fun­ders seems in­con­sist­ent with buy­ing a board seat to in­flu­ence OpenAI. If the Open Phil­an­thropy Pro­ject thinks that Holden’s judg­ment is good enough that he should be in con­trol, why only here? If he thinks that other Open Phil­an­thropy Pro­ject AI safety grantees have good judg­ment but OpenAI doesn’t, why not give them sim­ilar amounts of money free of strings to spend at their dis­cre­tion and see what hap­pens? Why not buy people like Eliezer Yudkowsky, Nick Bostrom, or Stu­art Rus­sell a seat on OpenAI’s board?

On the other hand, the Open Phil­an­thropy Pro­ject is right on the mer­its here with re­spect to safe su­per­in­tel­li­gence de­vel­op­ment. Open­ness makes sense for weak AI, but if you’re build­ing true strong AI you want to make sure you’re co­oper­at­ing with all the other teams in a single closed ef­fort. I agree with the Open Phil­an­thropy Pro­ject’s as­sess­ment of the rel­ev­ant risks. But it’s not clear to me how of­ten join­ing the bad guys to pre­vent their worst ex­cesses is a good strategy, and it seems like it has to of­ten be a mis­take. Still, I’m mind­ful of her­oes like John Rabe, Chi­une Su­gi­hara, and Os­car Schind­ler. And if I think someone has a good idea for im­prov­ing things, it makes sense to real­loc­ate con­trol from people who have worse ideas, even if there’s some po­ten­tial bet­ter al­loc­a­tion.

On the other hand, is Holden Karnof­sky the right per­son to do this? The case is mixed.

He listens to and en­gages with the ar­gu­ments from prin­cipled ad­voc­ates for AI safety re­search, such as Nick Bostrom, Eliezer Yudkowsky, and Stu­art Rus­sell. This is a point in his fa­vor. But, I can think of other people who en­gage with such ar­gu­ments. For in­stance, OpenAI founder Elon Musk has pub­licly praised Bostrom’s book Su­per­in­tel­li­gence, and founder Sam Alt­man has writ­ten two blog posts sum­mar­iz­ing con­cerns about AI safety reas­on­ably co­gently. Alt­man even asked Luke Muehl­hauser, former ex­ec­ut­ive dir­ector of MIRI, for feed­back pre-pub­lic­a­tion. He’s met with Nick Bostrom. That sug­gests a sub­stan­tial level of dir­ect en­gage­ment with the field, al­though Holden has en­gaged for a longer time, more ex­tens­ively, and more dir­ectly.

Another point in Holden’s fa­vor, from my per­spect­ive, is that un­der his lead­er­ship, the Open Phil­an­thropy Pro­ject has fun­ded the most ser­i­ous-seem­ing pro­grams for both weak and strong AI safety re­search. But Musk also man­aged to (in­dir­ectly) fund AI safety re­search at MIRI and by Nick Bostrom per­son­ally, via his $10 mil­lion FLI grant.

The Open Phil­an­thropy Pro­ject also says that it ex­pects to learn a lot about AI re­search from this, which will help it make bet­ter de­cisions on AI risk in the fu­ture and in­flu­ence the field in the right way. This is reas­on­able as far as it goes. But re­mem­ber that the case for po­s­i­tion­ing the Open Phil­an­thropy Pro­ject to do this re­lies on the as­sump­tion that the Open Phil­an­thropy Pro­ject will im­prove mat­ters by be­com­ing a cent­ral in­flu­en­cer in this field. This move is con­sist­ent with reach­ing that goal, but it is not in­de­pend­ent evid­ence that the goal is the right one.

Over­all, there are good nar­row reas­ons to think that this is a po­ten­tial im­prove­ment over the prior situ­ation around OpenAI – but only a small and ill-defined im­prove­ment, at con­sid­er­able at­ten­tional cost, and with the off­set­ting po­ten­tial harm of in­creas­ing OpenAI’s per­ceived le­git­im­acy as a long-run AI safety or­gan­iz­a­tion.

And it’s wor­ry­ing that Open Phil­an­thropy Pro­ject’s largest grant – not just for AI risk, but ever (aside from GiveWell Top Char­ity fund­ing) – is be­ing made to an or­gan­iz­a­tion at which Holden’s house­mate and fu­ture brother-in-law is a lead­ing re­searcher. The nepot­ism ar­gu­ment is not my cent­ral ob­jec­tion. If I oth­er­wise thought the grant were ob­vi­ously a good idea, it wouldn’t worry me, be­cause it’s nat­ural for people with shared val­ues and out­looks to be­come close non­pro­fes­sion­ally as well. But in the ab­sence of a clear com­pel­ling spe­cific case for the grant, it’s wor­ry­ing.

Al­to­gether, I’m not say­ing this is an un­reas­on­able shift, con­sidered in isol­a­tion. I’m not even sure this is a bad thing for the Open Phil­an­thropy Pro­ject to be do­ing – in­siders may have in­form­a­tion that I don’t, and that is dif­fi­cult to com­mu­nic­ate to out­siders. But as out­siders, there comes a point when someone’s maxed out their moral credit, and we should wait for res­ults be­fore act­ively try­ing to en­trust the Open Phil­an­thropy Pro­ject and its staff with more re­spons­ib­il­ity.

EA Funds and self-recommendation

The Centre for Ef­fect­ive Al­tru­ism is act­ively try­ing to en­trust the Open Phil­an­thropy Pro­ject and its staff with more re­spons­ib­il­ity.

The con­cerns of CEA’s CEO Wil­liam MacAskill about GiveWell have, as far as I can tell, never been ad­dressed, and the un­der­ly­ing is­sues have only be­come more acute. But CEA is now work­ing to put more money un­der the con­trol of Open Phil­an­thropy Pro­ject staff, through its new EA Funds product – a way for sup­port­ers to del­eg­ate giv­ing de­cisions to ex­pert EA “fund man­agers” by giv­ing to one of four funds: Global Health and Devel­op­ment, An­imal Wel­fare, Long-Term Fu­ture, and Ef­fect­ive Al­tru­ism Com­munity.

The Ef­fect­ive Al­tru­ism move­ment began by say­ing that be­cause very poor people ex­ist, we should real­loc­ate money from or­din­ary people in the de­veloped world to the global poor. Now the pitch is in ef­fect that be­cause very poor people ex­ist, we should real­loc­ate money from or­din­ary people in the de­veloped world to the ex­tremely wealthy. This is a strange and sur­pris­ing place to end up, and it’s worth re­tra­cing our steps. Again, I find it easi­est to think of three stages:

  1. Money can go much farther in the de­vel­op­ing world. Here, we’ve found some ex­amples for you. As a res­ult, you can do a huge amount of good by giv­ing away a large share of your in­come, so you ought to.

  2. We’ve found ways for you to do a huge amount of good by giv­ing away a large share of your in­come for de­vel­op­ing-world in­ter­ven­tions, so you ought to trust our re­com­mend­a­tions. You ought to give a large share of your in­come to these weird things our friends are do­ing that are even bet­ter, or join our friends.

  3. We’ve found ways for you to do a huge amount of good by fund­ing weird things our friends are do­ing, so you ought to trust the people we trust. You ought to give a large share of your in­come to a multi-bil­lion-dol­lar found­a­tion that funds such things.

Stage 1: The dir­ect pitch

At first, Giv­ing What We Can (the or­gan­iz­a­tion that even­tu­ally be­came CEA) had a simple, easy to un­der­stand pitch:

Giv­ing What We Can is the brainchild of Toby Ord, a philo­sopher at Bal­liol Col­lege, Ox­ford. In­spired by the ideas of eth­i­cists Peter Singer and Tho­mas Pogge, Toby de­cided in 2009 to com­mit a large pro­por­tion of his in­come to char­it­ies that ef­fect­ively al­le­vi­ate poverty in the de­vel­op­ing world.

[…]

Dis­cov­er­ing that many of his friends and col­leagues were in­ter­ested in mak­ing a sim­ilar pledge, Toby worked with fel­low Ox­ford philo­sopher Will MacAskill to cre­ate an in­ter­na­tional or­gan­iz­a­tion of people who would donate a sig­ni­fic­ant pro­por­tion of their in­come to cost-ef­fect­ive char­it­ies.

Giv­ing What We Can launched in Novem­ber 2009, at­tract­ing sig­ni­fic­ant me­dia at­ten­tion. Within a year, 64 people had joined the so­ci­ety, their pledged dona­tions amount­ing to $21 mil­lion. Ini­tially run on a vo­lun­teer basis, Giv­ing What We Can took on full-time staff in the sum­mer of 2012.

In ef­fect, its ar­gu­ment was: “Look, you can do huge amounts of good by giv­ing to people in the de­vel­op­ing world. Here are some ex­amples of char­it­ies that do that. It seems like a great idea to give 10% of our in­come to those char­it­ies.”

GWWC was a simple product, with a clear, lim­ited scope. Its founders be­lieved that people, in­clud­ing them, ought to do a thing – so they ar­gued dir­ectly for that thing, us­ing the ar­gu­ments that had per­suaded them. If it wasn’t for you, it was easy to fig­ure that out; but a sur­pris­ingly large num­ber of people were per­suaded by a simple, dir­ect state­ment of the ar­gu­ment, took the pledge, and gave a lot of money to char­it­ies help­ing the world’s poorest.

Stage 2: Rhet­oric and be­lief diverge

Then, GWWC staff were per­suaded you could do even more good with your money in areas other than de­vel­op­ing-world char­ity, such as ex­ist­en­tial risk mit­ig­a­tion. En­cour­aging dona­tions and work in these areas be­came part of the broader Ef­fect­ive Al­tru­ism move­ment, and GWWC’s um­brella or­gan­iz­a­tion was named the Centre for Ef­fect­ive Al­tru­ism. So far, so good.

But this left Effective Altruism in an awkward position: while its leaders often personally believe that the most effective way to do good is far-future work or similarly weird-sounding things, many people who can see the merits of the developing-world charity argument reject the claim that, because the vast majority of people live in the far future, even a very small improvement in humanity’s long-run prospects outweighs huge improvements on the global poverty front. They also often reject similar scope-sensitive arguments for things like animal charities.

Giv­ing What We Can’s page on what we can achieve still fo­cuses on global poverty, be­cause de­vel­op­ing-world char­ity is easier to ex­plain per­suas­ively. However, EA lead­er­ship tends to privately fo­cus on things like AI risk. Two years ago many at­tendees at the EA Global con­fer­ence in the San Fran­cisco Bay Area were sur­prised that the con­fer­ence fo­cused so heav­ily on AI risk, rather than the global poverty in­ter­ven­tions they’d ex­pec­ted.

Stage 3: Ef­fect­ive al­tru­ism is self-recommending

Shortly before the launch of the EA Funds I was told in informal conversations that they were a response to demand: Giving What We Can pledge-takers and other EA donors had told CEA that they trusted it to make giving decisions on their behalf. CEA was responding by creating a product for the people who wanted it.

This seemed pretty reas­on­able to me, and on the whole good. If someone wants to trust you with their money, and you think you can do some­thing good with it, you might as well take it, be­cause they’re es­tim­at­ing your skill above theirs. But not every­one agrees, and as the Madoff case demon­strates, “people are beg­ging me to take their money” is not a defin­it­ive ar­gu­ment that you are do­ing any­thing real.

In prac­tice, the funds are man­aged by Open Phil­an­thropy Pro­ject staff:

We want to keep this idea as simple as pos­sible to be­gin with, so we’ll have just four funds, with the fol­low­ing man­agers:

  • Global Health and Devel­op­ment—Elie Hassenfeld

  • An­imal Wel­fare – Lewis Bollard

  • Long-run fu­ture – Nick Beckstead

  • Move­ment-build­ing – Nick Beckstead

(Note that the meta-char­ity fund will be able to fund CEA; and note that Nick Beck­stead is a Trustee of CEA. The long-run fu­ture fund and the meta-char­ity fund con­tinue the work that Nick has been do­ing run­ning the EA Giv­ing Fund.)

It’s not a co­in­cid­ence that all the fund man­agers work for GiveWell or Open Phil­an­thropy. First, these are the or­gan­isa­tions whose char­ity eval­u­ation we re­spect the most. The worst-case scen­ario, where your dona­tion just adds to the Open Phil­an­thropy fund­ing within a par­tic­u­lar area, is there­fore still a great out­come. Se­cond, they have the best in­form­a­tion avail­able about what grants Open Phil­an­thropy are plan­ning to make, so have a good un­der­stand­ing of where the re­main­ing fund­ing gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is im­port­ant, but isn’t cur­rently ad­dressed by Open Phil­an­thropy.

In past years, Giv­ing What We Can re­com­mend­a­tions have largely over­lapped with GiveWell’s top char­it­ies.

In the com­ments on the launch an­nounce­ment on the EA Forum, sev­eral people (in­clud­ing me) poin­ted out that the Open Phil­an­thropy Pro­ject seems to be hav­ing trouble giv­ing away even the money it already has, so it seems odd to dir­ect more money to Open Phil­an­thropy Pro­ject de­cision­makers. CEA’s senior mar­ket­ing man­ager replied that the Funds were a min­imum vi­able product to test the concept:

I don’t think the long-term goal is that OpenPhil pro­gram of­ficers are the only fund man­agers. Work­ing with them was the best way to get an MVP ver­sion in place.

This also seemed okay to me, and I said so at the time.

[NOTE: I’ve ed­ited the next para­graph to ex­cise some un­re­li­able in­form­a­tion. Sorry for the er­ror, and thanks to Rob Wib­lin for point­ing it out.]

After they were launched, though, I saw phrasings that were not nearly so cautious, instead claiming that this was generally a better way to give. As of this writing, if someone on the effectivealtruism.org website clicks on “Donate Effectively” they will be led directly to a page promoting EA Funds. When I looked at Giving What We Can’s top charities page in early April, it recommended the EA Funds “as the highest impact option for donors.”

This is not a response to demand; it is an attempt to create demand by using CEA’s authority, telling people that the funds are better than what they’re doing already. By contrast, GiveWell’s Top Charities page simply says:

Our top char­it­ies are evid­ence-backed, thor­oughly vet­ted, un­der­fun­ded or­gan­iz­a­tions.

This care­fully avoids any overt claim that they’re the highest-im­pact op­tion avail­able to donors. GiveWell avoids say­ing that be­cause there’s no way they could know it, so say­ing it wouldn’t be truth­ful.

A marketing email might have just been dashed off quickly, and an exaggerated wording might just have been an oversight. But the Giving What We Can top charities page is not a dashed-off email, and as noted above, it recommended the EA Funds as the highest impact option for donors.

The word­ing has since been qual­i­fied with “for most donors”, which is a good change. But the thing I’m wor­ried about isn’t just the ex­pli­cit ex­ag­ger­ated claims – it’s the un­der­ly­ing mar­ket­ing mind­set that made them seem like a good idea in the first place. EA seems to have switched from an en­dorse­ment of the best things out­side it­self, to an en­dorse­ment of it­self. And it’s con­cen­trat­ing de­cision­mak­ing power in the Open Phil­an­thropy Pro­ject.

Ef­fect­ive al­tru­ism is over­ex­ten­ded, but it doesn’t have to be

There is a saying in finance that was old even back when Keynes said it. If you owe the bank a million dollars, then you have a problem. If you owe the bank a billion dollars, then the bank has a problem.

In other words, if someone ex­tends you a level of trust they could sur­vive writ­ing off, then they might call in that loan. As a res­ult, they have lever­age over you. But if they over­ex­tend, put­ting all their eggs in one bas­ket, and you are that bas­ket, then you have lever­age over them; you’re too big to fail. Let­ting you fail would be so dis­astrous for their in­terests that you can ex­tract nearly ar­bit­rary con­ces­sions from them, in­clud­ing fur­ther in­vest­ment. For this reason, suc­cess­ful in­sti­tu­tions of­ten try to di­ver­sify their in­vest­ments, and avoid over­ex­tend­ing them­selves. Regu­lat­ors, for the same reason, try to pre­vent banks from be­com­ing “too big to fail.”

The Effective Altruism movement is concentrating decisionmaking power and trust as much as possible, in a way that sets it up to require ever-increasing investments of confidence to keep the game going.

The al­tern­at­ive is to keep the scope of each or­gan­iz­a­tion nar­row, overtly ask for trust for each ven­ture sep­ar­ately, and make it clear what sorts of pro­grams are be­ing fun­ded. For in­stance, Giv­ing What We Can should go back to its ini­tial fo­cus of global poverty re­lief.

Like many EA lead­ers, I hap­pen to be­lieve that any­thing you can do to steer the far fu­ture in a bet­ter dir­ec­tion is much, much more con­sequen­tial for the well-be­ing of sen­tient creatures than any purely short-run im­prove­ment you can cre­ate now. So it might seem odd that I think Giv­ing What We Can should stay fo­cused on global poverty. But, I be­lieve that the single most im­port­ant thing we can do to im­prove the far fu­ture is hold onto our abil­ity to ac­cur­ately build shared mod­els. If we use bait-and-switch tac­tics, we are act­ively erod­ing the most im­port­ant type of cap­ital we have – co­ordin­a­tion ca­pa­city.

If you do not think giv­ing 10% of one’s in­come to global poverty char­it­ies is the right thing to do, then you can’t in full in­teg­rity urge oth­ers to do it – so you should stop. You might still be­lieve that GWWC ought to ex­ist. You might still be­lieve that it is a pos­it­ive good to en­cour­age people to give much of their in­come to help the global poor, if they wouldn’t have been do­ing any­thing else es­pe­cially ef­fect­ive with the money. If so, and you hap­pen to find your­self in charge of an or­gan­iz­a­tion like Giv­ing What We Can, the thing to do is write a let­ter to GWWC mem­bers telling them that you’ve changed your mind, and why, and of­fer­ing to give away the brand to who­ever seems best able to hon­estly main­tain it.

If someone at the Centre for Ef­fect­ive Al­tru­ism fully be­lieves in GWWC’s ori­ginal mis­sion, then that might make the trans­ition easier. If not, then one still has to tell the truth and do what’s right.

And what of the EA Funds? The Long-Term Fu­ture Fund is run by Open Phil­an­thropy Pro­ject Pro­gram Of­ficer Nick Beck­stead. If you think that it’s a good thing to del­eg­ate giv­ing de­cisions to Nick, then I would agree with you. Nick’s a great guy! I’m al­ways happy to see him when he shows up at house parties. He’s smart, and he act­ively seeks out ar­gu­ments against his cur­rent point of view. But the right thing to do, if you want to per­suade people to del­eg­ate their giv­ing de­cisions to Nick Beck­stead, is to make a prin­cipled case for del­eg­at­ing giv­ing de­cisions to Nick Beck­stead. If the Centre for Ef­fect­ive Al­tru­ism did that, then Nick would al­most cer­tainly feel more free to al­loc­ate funds to the best things he knows about, not just the best things he sus­pects EA Funds donors would be able to un­der­stand and agree with.

If you can’t dir­ectly per­suade people, then maybe you’re wrong. If the prob­lem is in­fer­en­tial dis­tance, then you’ve got some work to do bridging that gap.

There’s noth­ing wrong with set­ting up a fund to make it easy. It’s ac­tu­ally a really good idea. But there is some­thing wrong with the mul­tiple lay­ers of vague in­dir­ec­tion in­volved in the cur­rent mar­ket­ing of the Far Fu­ture fund – us­ing global poverty to sell the gen­eric idea of do­ing the most good, then us­ing CEA’s iden­tity as the or­gan­iz­a­tion in charge of do­ing the most good to per­suade people to del­eg­ate their giv­ing de­cisions to it, and then send­ing their money to some dude at the multi-bil­lion-dol­lar found­a­tion to give away at his per­sonal dis­cre­tion. The same ar­gu­ment ap­plies to all four Funds.

Like­wise, if you think that work­ing dir­ectly on AI risk is the most im­port­ant thing, then you should make ar­gu­ments dir­ectly for work­ing on AI risk. If you can’t dir­ectly per­suade people, then maybe you’re wrong. If the prob­lem is in­fer­en­tial dis­tance, it might make sense to im­it­ate the ex­ample of someone like Eliezer Yudkowsky, who used in­dir­ect meth­ods to bridge the in­fer­en­tial gap by writ­ing ex­tens­ively on in­di­vidual hu­man ra­tion­al­ity, and did not try to con­trol oth­ers’ ac­tions in the mean­time.

If Holden thinks he should be in charge of some AI safety re­search, then he should ask Good Ven­tures for funds to ac­tu­ally start an AI safety re­search or­gan­iz­a­tion. I’d be ex­cited to see what he’d come up with if he had full con­trol of and re­spons­ib­il­ity for such an or­gan­iz­a­tion. But I don’t think any­one has a good plan to work dir­ectly on AI risk, and I don’t have one either, which is why I’m not dir­ectly work­ing on it or fund­ing it. My plan for im­prov­ing the far fu­ture is to build hu­man co­ordin­a­tion ca­pa­city.

(If, by con­trast, Holden just thinks there needs to be co­ordin­a­tion between dif­fer­ent AI safety or­gan­iz­a­tions, the ob­vi­ous thing to do would be to work with FLI on that, e.g. by giv­ing them enough money to throw their weight around as a fun­der. They or­gan­ized the suc­cess­ful Puerto Rico con­fer­ence, after all.)

Another thing that would be en­cour­aging would be if at least one of the Funds were not ad­min­istered en­tirely by an Open Phil­an­thropy Pro­ject staffer, and ideally an ex­pert who doesn’t be­ne­fit from the halo of “be­ing an EA.” For in­stance, Chris Blattman is a de­vel­op­ment eco­nom­ist with ex­per­i­ence design­ing pro­grams that don’t just use but gen­er­ate evid­ence on what works. When people were ar­guing about whether sweat­shops are good or bad for the global poor, he ac­tu­ally went and looked by per­form­ing a ran­dom­ized con­trolled trial. He’s lead­ing two new ini­ti­at­ives with J-PAL and IPA, and ex­pects that dir­ect­ors design­ing stud­ies will also have to spend time fun­drais­ing. Hav­ing fund­ing lined up seems like the sort of thing that would let them spend more time ac­tu­ally run­ning pro­grams. And more gen­er­ally, he seems likely to know about fund­ing op­por­tun­it­ies the Open Phil­an­thropy Pro­ject doesn’t, simply be­cause he’s em­bed­ded in a slightly dif­fer­ent part of the global health and de­vel­op­ment net­work.

Nar­rower pro­jects that rely less on the EA brand and more on what they’re ac­tu­ally do­ing, and more co­oper­a­tion on equal terms with out­siders who seem to be do­ing some­thing good already, would do a lot to help EA grow bey­ond put­ting stick­ers on its own be­ha­vior chart. I’d like to see EA grow up. I’d be ex­cited to see what it might do.

Summary

  1. Good pro­grams don’t need to dis­tort the story people tell about them, while bad pro­grams do.

  2. Moral con­fid­ence games – treat­ing past prom­ises and trust as a track re­cord to jus­tify more trust – are an ex­ample of the kind of dis­tor­tion men­tioned in (1), that be­ne­fits bad pro­grams more than good ones.

  3. The Open Philanthropy Project’s OpenAI grant represents a shift from evaluating other programs’ effectiveness, to assuming its own effectiveness.

  4. EA Funds rep­res­ents a shift from EA eval­u­at­ing pro­grams’ ef­fect­ive­ness, to as­sum­ing EA’s ef­fect­ive­ness.

  5. A shift from eval­u­at­ing other pro­grams’ ef­fect­ive­ness, to as­sum­ing one’s own ef­fect­ive­ness, is an ex­ample of the kind of “moral con­fid­ence game” men­tioned in (2).

  6. EA ought to fo­cus on scope-lim­ited pro­jects, so that it can dir­ectly make the case for those par­tic­u­lar pro­jects in­stead of re­ly­ing on EA iden­tity as a reason to sup­port an EA or­gan­iz­a­tion.

  7. EA or­gan­iz­a­tions ought to en­trust more re­spons­ib­il­ity to out­siders who seem to be do­ing good things but don’t overtly identify as EA, in­stead of try­ing to keep it all in the fam­ily.

(Cross-pos­ted at my per­sonal blog and the EA Forum.
Dis­clos­ure: I know many people in­volved at many of the or­gan­iz­a­tions dis­cussed, and I used to work for GiveWell. I have no cur­rent in­sti­tu­tional af­fil­i­ation to any of them. Every­one men­tioned has al­ways been nice to me and I have no per­sonal com­plaints.)