Effective altruism is self-recommending

A par­ent I know re­ports (some de­tails anonymized):

Re­cently we bought my 3-year-old daugh­ter a “be­hav­ior chart,” in which she can earn stick­ers for achieve­ments like not throw­ing tantrums, eat­ing fruits and veg­eta­bles, and go­ing to sleep on time. We suc­cess­fully im­pressed on her that a ma­jor goal each day was to earn as many stick­ers as pos­si­ble.

This morn­ing, though, I found her just plas­ter­ing her en­tire be­hav­ior chart with stick­ers. She gen­uinely seemed to think I’d be proud of how many stick­ers she now had.

The Effec­tive Altru­ism move­ment has now en­tered this ex­tremely cute stage of cog­ni­tive de­vel­op­ment. EA is more than three years old, but in­sti­tu­tions age differ­ently than in­di­vi­d­u­als.

What is a con­fi­dence game?

In 2009, in­vest­ment man­ager and con artist Bernie Mad­off pled guilty to run­ning a mas­sive fraud, with $50 billion in fake re­turn on in­vest­ment, hav­ing out­right em­bez­zled around $18 billion out of the $36 billion in­vestors put into the fund. Only a cou­ple of years ear­lier, when my grand­father was still al­ive, I re­mem­ber him tel­ling me about how Mad­off was a ge­nius, get­ting his in­vestors a con­sis­tent high re­turn, and about how he wished he could be in on it, but Mad­off wasn’t ac­cept­ing ad­di­tional in­vestors.

What Mad­off was run­ning was a clas­sic Ponzi scheme. In­vestors gave him money, and he told them that he’d got­ten them an ex­cep­tion­ally high re­turn on in­vest­ment, when in fact he had not. But be­cause he promised to be able to do it again, his in­vestors mostly rein­vested their money, and more peo­ple were ex­cited about get­ting in on the deal. There was more than enough money to cover the few peo­ple who wanted to take money out of this amaz­ing op­por­tu­nity.
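
To make the mechanics concrete, here is a toy simulation of why such a scheme can keep paying withdrawals for a long time: the balances on investors’ statements compound at the promised rate, while the real pool of cash only grows when new money comes in. It’s a minimal sketch in Python, and every number in it is invented for illustration rather than taken from the Madoff case.

```python
# Toy model of a Ponzi scheme's cash flow. All parameters are invented
# for illustration; none of this is based on the actual Madoff figures.

def simulate_ponzi(years, new_money_per_year, promised_return, withdrawal_rate):
    """Track the gap between what investors are told and what actually exists."""
    reported = 0.0  # what the statements say investors' accounts are worth
    actual = 0.0    # cash the operator really has on hand
    for year in range(1, years + 1):
        reported += new_money_per_year            # deposits show up on statements
        actual += new_money_per_year              # ...and in the real pool
        reported *= 1 + promised_return           # fictional "returns" compound
        withdrawals = withdrawal_rate * reported  # a few investors cash out
        reported -= withdrawals
        actual -= withdrawals                     # paid out of the real pool
        if actual < 0:
            return year, reported, actual         # the scheme can no longer pay
    return None, reported, actual

# With modest promises and few withdrawals, the scheme survives for decades.
print(simulate_ponzi(years=40, new_money_per_year=100.0,
                     promised_return=0.10, withdrawal_rate=0.05))
```

The collapse date in this toy model depends mostly on the promised rate of return and the pace of withdrawals, which is why modest promises buy so much time.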

Ponzi schemes, pyramid schemes, and speculative bubbles are all situations in which investors’ expected profits are paid out from the money paid in by new investors, instead of from any independently profitable venture. Ponzi schemes are centrally managed – the person running the scheme represents it to investors as legitimate, and takes responsibility for finding new investors and paying off old ones. In pyramid schemes such as multi-level marketing and chain letters, each generation of investors recruits new investors and profits from them. In speculative bubbles, there is no formal structure propping up the scheme, only a common, mutually reinforcing set of expectations among speculators driving up the price of something that was already for sale.

The gen­eral situ­a­tion in which some­one sets them­self up as the repos­i­tory of oth­ers’ con­fi­dence, and uses this as lev­er­age to ac­quire in­creas­ing in­vest­ment, can be called a con­fi­dence game.

Some of the most iconic Ponzi schemes blew up quickly be­cause they promised wildly un­re­al­is­tic growth rates. This had three un­de­sir­able effects for the peo­ple run­ning the schemes. First, it at­tracted too much at­ten­tion – too many peo­ple wanted into the scheme too quickly, so they rapidly ex­hausted sources of new cap­i­tal. Se­cond, be­cause their rates of re­turn were im­plau­si­bly high, they made them­selves tar­gets for scrutiny. Third, the ex­tremely high rates of re­turn them­selves caused their promises to quickly out­pace what they could plau­si­bly re­turn to even a small share of their in­vestor vic­tims.

Madoff was careful to avoid all these problems, which is why his scheme lasted so long. He promised only plausibly high returns (around 10% annually) for a successful hedge fund, especially one illegally engaged in insider trading, rather than the sort of implausibly high returns typical of more blatant Ponzi schemes. (Charles Ponzi promised to double investors’ money in 90 days.) Madoff showed reluctance to accept new clients, like any other fund manager who doesn’t want to get too big for their trading strategy.
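
For a sense of scale, here is the arithmetic behind that contrast, as a quick Python calculation using the 90-day and 10% figures mentioned above:

```python
# Implied annualized multiples of the two pitches.
ponzi_annual_multiple = 2 ** (365 / 90)   # "double your money every 90 days"
madoff_annual_multiple = 1.10             # a plausible-sounding 10% per year

print(f"Ponzi-style implied annual multiple:  {ponzi_annual_multiple:.1f}x")
print(f"Madoff-style claimed annual multiple: {madoff_annual_multiple:.2f}x")

# Compounded over a decade, the promises diverge absurdly:
print(f"Ponzi-style promise after 10 years:  {ponzi_annual_multiple ** 10:.2e}x")
print(f"Madoff-style promise after 10 years: {madoff_annual_multiple ** 10:.2f}x")
```

One pitch quickly promises more money than could plausibly exist; the other stays within the range a lucky, skilled, or crooked fund might actually deliver.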

He didn’t plas­ter stick­ers all over his be­hav­ior chart – he put a rea­son­able num­ber of stick­ers on it. He played a long game.

Not all confidence games are inherently bad. For instance, the US national pension system, Social Security, operates as a kind of Ponzi scheme, yet it is not obviously unsustainable, and many people continue to be glad that it exists. Nominally, when people pay Social Security taxes, the money is invested in the Social Security trust fund, which holds interest-bearing financial assets that will be used to pay out benefits in their old age. In this respect it looks like an ordinary pension fund.

How­ever, the fi­nan­cial as­sets are US Trea­sury bonds. There is no in­de­pen­dently prof­itable ven­ture. The Fed­eral Govern­ment of the United States of Amer­ica is quite liter­ally writ­ing an IOU to it­self, and then spend­ing the money on cur­rent ex­pen­di­tures, in­clud­ing pay­ing out cur­rent So­cial Se­cu­rity benefits.

The Fed­eral Govern­ment, of course, can write as large an IOU to it­self as it wants. It could make all tax rev­enues part of the So­cial Se­cu­rity pro­gram. It could is­sue new Trea­sury bonds and gift them to So­cial Se­cu­rity. None of this would in­crease its abil­ity to pay out So­cial Se­cu­rity benefits. It would be an empty ex­er­cise in putting stick­ers on its own chart.

If the Federal Government loses the ability to collect enough taxes to pay out Social Security benefits, there is no additional capacity to pay represented by US Treasury bonds. What we have is an implied promise to pay out future benefits, backed by the expectation that the government will be able to collect taxes in the future, including Social Security taxes.
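
The accounting point is easiest to see on a consolidated balance sheet: a bond the government issues to itself is an asset in one pocket and an equal liability in the other, so it nets out to nothing. Here is a minimal sketch, with made-up figures:

```python
# Toy consolidated view of a government that "funds" its pension program
# with its own bonds. All figures are invented for illustration.

expected_tax_revenue = 1_000
trust_fund_bonds = 2_800        # Treasury bonds held by the trust fund
bonds_owed_by_treasury = 2_800  # the same bonds, as a government liability

def payout_capacity(taxes, bonds_held, bonds_owed):
    """Capacity to pay benefits once IOUs written to oneself are netted out."""
    internal_iou = min(bonds_held, bonds_owed)  # asset and liability cancel
    return taxes + bonds_held - internal_iou

print(payout_capacity(expected_tax_revenue, trust_fund_bonds, bonds_owed_by_treasury))
# Doubling the bonds the government writes to itself changes nothing:
print(payout_capacity(expected_tax_revenue,
                      trust_fund_bonds * 2, bonds_owed_by_treasury * 2))
```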

There’s nothing necessarily wrong with this, except that the mechanism by which Social Security is funded is obscured by financial engineering. However, this misdirection should raise at least some doubts as to the underlying sustainability or desirability of the commitment. In fact, this scheme was adopted specifically to give people the impression that they had some sort of property rights over their Social Security pension, in order to make the program politically difficult to eliminate. Once people have “bought in” to a program, they will be reluctant to treat their prior contributions as sunk costs, and willing to invest additional resources to salvage their investment, in ways that may make them increasingly reliant on it.

Not all con­fi­dence games are in­trin­si­cally bad, but du­bi­ous pro­grams benefit the most from be­ing set up as con­fi­dence games. More gen­er­ally, bad pro­grams are the ones that benefit the most from be­ing al­lowed to fid­dle with their own ac­count­ing. As Daniel Davies writes, in The D-Squared Digest One Minute MBA—Avoid­ing Pro­jects Pur­sued By Morons 101:

Good ideas do not need lots of lies told about them in order to gain public acceptance. I was first made aware of this during an accounting class. We were discussing the subject of accounting for stock options at technology companies. […] One side (mainly technology companies and their lobbyists) held that stock option grants should not be treated as an expense on public policy grounds; treating them as an expense would discourage companies from granting them, and stock options were a vital compensation tool that incentivised performance, rewarded dynamism and innovation and created vast amounts of value for America and the world. The other side (mainly people like Warren Buffett) held that stock options looked awfully like a massive blag carried out by management at the expense of shareholders, and that the proper place to record such blags was the P&L account.

Our lec­turer, in sum­ming up the de­bate, made the not un­rea­son­able point that if stock op­tions re­ally were a fan­tas­tic tool which un­leashed the cre­ative power in ev­ery em­ployee, ev­ery­one would want to ex­pense as many of them as pos­si­ble, the bet­ter to boast about how in­no­va­tive, em­pow­ered and fan­tas­tic they were. Since the tech com­pa­nies’ point of view ap­peared to be that if they were ever forced to ac­count hon­estly for their op­tion grants, they would quickly stop mak­ing them, this offered de­cent prima fa­cie ev­i­dence that they weren’t, re­ally, all that fan­tas­tic.

How­ever, I want to gen­er­al­ize the con­cept of con­fi­dence games from the do­main of fi­nan­cial cur­rency, to the do­main of so­cial credit more gen­er­ally (of which money is a par­tic­u­lar form that our so­ciety com­monly uses), and in par­tic­u­lar I want to talk about con­fi­dence games in the cur­rency of credit for achieve­ment.

If I were ap­ply­ing for a very im­por­tant job with great re­spon­si­bil­ities, such as Pres­i­dent of the United States, CEO of a top cor­po­ra­tion, or head or board mem­ber of a ma­jor AI re­search in­sti­tu­tion, I could be ex­pected to have some rele­vant prior ex­pe­rience. For in­stance, I might have had some suc­cess man­ag­ing a similar, smaller in­sti­tu­tion, or serv­ing the same in­sti­tu­tion in a lesser ca­pac­ity. More gen­er­ally, when I make a bid for con­trol over some­thing, I am im­plic­itly claiming that I have enough so­cial credit – enough of a track record – that I can be ex­pected to do good things with that con­trol.

In gen­eral, if some­one has done a lot, we should ex­pect to see an ice­berg pat­tern where a small eas­ily-visi­ble part sug­gests a lot of solid but harder-to-ver­ify sub­stance un­der the sur­face. One might be tempted to make a habit of im­put­ing a much larger ice­berg from the com­bi­na­tion of a small floaty bit, and promises. But, a small eas­ily-visi­ble part with claims of a lot of harder-to-see sub­stance is easy to mimic with­out ac­tu­ally do­ing the work. As Davies con­tinues:

The Vi­tal Im­por­tance of Au­dit. Em­pha­sised over and over again. Brealey and My­ers has a sec­tion on this, in which they re­mind cal­low stu­dents that like back­ing-up one’s com­puter files, this is a les­son that ev­ery­one seems to have to learn the hard way. Ba­si­cally, it’s been shown time and again and again; com­pa­nies which do not au­dit com­pleted pro­jects in or­der to see how ac­cu­rate the origi­nal pro­jec­tions were, tend to get ex­actly the fore­casts and pro­jects that they de­serve. Com­pa­nies which have a cul­ture where there are no con­se­quences for mak­ing dishon­est fore­casts, get the pro­jects they de­serve. Com­pa­nies which al­lo­cate blank cheques to man­age­ment teams with a proven record of failure and men­dac­ity, get what they de­serve.

If you can independently put stickers on your own chart, then your chart is no longer reliably tracking something externally verified. If forecasts are not checked and tracked, or forecasters are not subsequently held accountable for their forecasts, then there is no reason to believe that assessments of future, ongoing, or past programs are accurate. Adopting a wait-and-see attitude, insisting on audits of actual results (not just predictions) before investing more, will definitely slow down funding for good programs. But without it, most of your funding will go to worthless ones.
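
The audit Davies is describing is mechanically simple: keep the original projections, record what actually happened, and look at the ratio. Here is a minimal sketch of that kind of check, in Python, with hypothetical project names and numbers:

```python
# Minimal audit of completed projects: compare original projections with
# measured results. Project names and numbers are hypothetical.

projects = [
    {"name": "project_a", "projected_benefit": 120, "measured_benefit": 95},
    {"name": "project_b", "projected_benefit": 300, "measured_benefit": 40},
    {"name": "project_c", "projected_benefit": 80,  "measured_benefit": None},
]

for p in projects:
    if p["measured_benefit"] is None:
        print(f'{p["name"]}: never measured, so the forecast was never testable')
        continue
    ratio = p["measured_benefit"] / p["projected_benefit"]
    print(f'{p["name"]}: delivered {ratio:.0%} of what was projected')
```

The hard part is not the arithmetic; it is committing in advance to record the projections, go back later, and let the ratios have consequences.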

Open Philan­thropy, OpenAI, and closed val­i­da­tion loops

The Open Philan­thropy Pro­ject re­cently an­nounced a $30 mil­lion grant to the $1 billion non­profit AI re­search or­ga­ni­za­tion OpenAI. This is the largest sin­gle grant it has ever made. The main point of the grant is to buy in­fluence over OpenAI’s fu­ture pri­ori­ties; Holden Karnofsky, Ex­ec­u­tive Direc­tor of the Open Philan­thropy Pro­ject, is get­ting a seat on OpenAI’s board as part of the deal. This marks the sec­ond ma­jor shift in fo­cus for the Open Philan­thropy Pro­ject.

The first shift (back when it was just called GiveWell) was from try­ing to find the best already-ex­ist­ing pro­grams to fund (“pas­sive fund­ing”) to en­vi­sion­ing new pro­grams and work­ing with grantees to make them re­al­ity (“ac­tive fund­ing”). The new shift is from fund­ing spe­cific pro­grams at all, to try­ing to take con­trol of pro­grams with­out any spe­cific plan.

To jus­tify the pas­sive fund­ing stage, all you have to be­lieve is that you can know bet­ter than other donors, among ex­ist­ing char­i­ties. For ac­tive fund­ing, you have to be­lieve that you’re smart enough to eval­u­ate po­ten­tial pro­grams, just like a char­ity founder might, and pick ones that will out­perform. But buy­ing con­trol im­plies that you think you’re so much bet­ter, that even be­fore you’ve eval­u­ated any pro­grams, if some­one’s do­ing some­thing big, you ought to have a say.

When GiveWell moved from a pas­sive to an ac­tive fund­ing strat­egy, it was rely­ing on the moral credit it had earned for its ex­ten­sive and well-re­garded char­ity eval­u­a­tions. The thing that was par­tic­u­larly ex­cit­ing about GiveWell was that they fo­cused on out­comes and effi­ciency. They didn’t just fo­cus on the size or in­ten­sity of the prob­lem a char­ity was ad­dress­ing. They didn’t just look at fi­nan­cial de­tails like over­head ra­tios. They asked the ques­tion a con­se­quen­tial­ist cares about: for a given ex­pen­di­ture of money, how much will this char­ity be able to im­prove out­comes?

How­ever, when GiveWell tracks its im­pact, it does not track ob­jec­tive out­comes at all. It tracks in­puts: at­ten­tion re­ceived (in the form of vis­its to its web­site) and money moved on the ba­sis of its recom­men­da­tions. In other words, its es­ti­mate of its own im­pact is based on the level of trust peo­ple have placed in it.

So, as GiveWell built out the Open Philan­thropy Pro­ject, its story was: We promised to do some­thing great. As a re­sult, we were en­trusted with a fair amount of at­ten­tion and money. There­fore, we should be given more re­spon­si­bil­ity. We rep­re­sented our be­hav­ior as praise­wor­thy, and as a re­sult peo­ple put stick­ers on our chart. For this rea­son, we should be ad­vanced stick­ers against fu­ture days of praise­wor­thy be­hav­ior.

Then, as the Open Philan­thropy Pro­ject ex­plored ac­tive fund­ing in more ar­eas, its es­ti­mate of its own effec­tive­ness grew. After all, it was fund­ing more spec­u­la­tive, hard-to-mea­sure pro­grams, but a multi-billion-dol­lar donor, which was largely rely­ing on the Open Philan­thropy Pro­ject’s opinions to as­sess effi­cacy (in­clud­ing its own effi­cacy), con­tinued to trust it.

What is miss­ing here is any ob­jec­tive track record of benefits. What this looks like to me, is a long sort of con­fi­dence game – or, us­ing less morally loaded lan­guage, a ven­ture with struc­tural re­li­ance on in­creas­ing amounts of lev­er­age – in the cur­rency of moral credit.

Ver­sion 0: GiveWell and pas­sive funding

First, there was GiveWell. GiveWell’s pur­pose was to find and vet ev­i­dence-backed char­i­ties. How­ever, it rec­og­nized that char­i­ties know their own busi­ness best. It wasn’t try­ing to do bet­ter than the char­i­ties; it was try­ing to do bet­ter than the typ­i­cal char­ity donor, by be­ing more dis­cern­ing.

GiveWell’s think­ing from this phase is ex­em­plified by co-founder Elie Hassen­feld’s Six tips for giv­ing like a pro:

When you give, give cash – no strings at­tached. You’re just a part-time donor, but the char­ity you’re sup­port­ing does this full-time and staff there prob­a­bly know a lot more about how to do their job than you do. If you’ve found a char­ity that you feel is ex­cel­lent – not just ac­cept­able – then it makes sense to trust the char­ity to make good de­ci­sions about how to spend your money.

GiveWell similarly tried to avoid dis­tort­ing char­i­ties’ be­hav­ior. Its job was only to eval­u­ate, not to in­terfere. To per­ceive, not to act. To find the best, and buy more of the same.

How did GiveWell assess its effectiveness in this stage? When GiveWell evaluates charities, it estimates their cost-effectiveness in advance. It assesses the program the charity is running, using experimental evidence in the form of randomized controlled trials. GiveWell also audits the charity to make sure it’s actually running the program, and figures out how much the program costs as implemented. This is an excellent, evidence-based way to generate a prediction of how much good will be done by moving money to the charity.
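
The structure of such an estimate is roughly: cost per unit of program delivered, times the number of units the trial evidence says are needed per unit of outcome. Here is a toy version with invented numbers; it is not GiveWell’s actual model:

```python
# Toy ex-ante cost-effectiveness estimate for a health program.
# Every number below is invented for illustration, not GiveWell's figures.

cost_per_treatment_delivered = 5.00  # dollars, from the charity's audited spending
treatments_per_death_averted = 200   # implied by trial effect sizes (assumed)

cost_per_life_saved = cost_per_treatment_delivered * treatments_per_death_averted
print(f"Predicted cost per life saved: ${cost_per_life_saved:,.0f}")

# This is a prediction. Testing it would mean going back later and measuring
# outcomes in the places where the money was actually spent.
```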

As far as I can tell, these pre­dic­tions are untested.

One of GiveWell’s early top char­i­ties was VillageReach, which helped Mozam­bique with TB im­mu­niza­tion lo­gis­tics. GiveWell es­ti­mated that VillageReach could save a life for $1,000. But this char­ity is no longer recom­mended. The pub­lic page says:

VillageReach (www.villagereach.org) was our top-rated organization for 2009, 2010 and much of 2011 and it has received over $2 million due to GiveWell’s recommendation. In late 2011, we removed VillageReach from our top-rated list because we felt its project had limited room for more funding. As of November 2012, we believe that this project may have room for more funding, but we still prefer our current highest-rated charities above it.

GiveWell re­an­a­lyzed the data it based its recom­men­da­tions on, but hasn’t pub­lished an af­ter-the-fact ret­ro­spec­tive of long-run re­sults. I asked GiveWell about this by email. The re­sponse was that such an as­sess­ment was not pri­ori­tized be­cause GiveWell had found im­ple­men­ta­tion prob­lems in VillageReach’s scale-up work as well as rea­sons to doubt its origi­nal con­clu­sion about the im­pact of the pi­lot pro­gram. It’s un­clear to me whether this has caused GiveWell to eval­u­ate char­i­ties differ­ently in the fu­ture.

I don’t think some­one look­ing at GiveWell’s page on VillageReach would be likely to reach the con­clu­sion that GiveWell now be­lieves its origi­nal recom­men­da­tion was likely er­ro­neous. GiveWell’s im­pact page con­tinues to count money moved to VillageReach with­out any men­tion of the re­tracted recom­men­da­tion. If we as­sume that the point of track­ing money moved is to track the benefit of mov­ing money from worse to bet­ter uses, then re­pu­di­ated pro­grams ought to be counted against the to­tal, as costs, rather than to­wards it.
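
The accounting change this implies is small but meaningful: money moved on a recommendation that was later repudiated gets subtracted from the impact total rather than added to it. A sketch, where the $2 million figure comes from the page quoted above and the rest is hypothetical:

```python
# Toy impact ledger. Money moved on later-repudiated recommendations is
# counted against the total rather than towards it. The $2M figure is from
# the quoted VillageReach page; the other figure is hypothetical.

money_moved = {
    "still_recommended_charity": 8_000_000,
    "villagereach": 2_000_000,
}
repudiated = {"villagereach"}

gross_total = sum(money_moved.values())
net_total = sum(-v if k in repudiated else v for k, v in money_moved.items())
print(f"Gross money moved: ${gross_total:,}")
print(f"Net of repudiated recommendations: ${net_total:,}")
```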

GiveWell has recom­mended the Against Malaria Foun­da­tion for the last sev­eral years as a top char­ity. AMF dis­tributes long-last­ing in­sec­ti­cide-treated bed nets to pre­vent mosquitos from trans­mit­ting malaria to hu­mans. Its eval­u­a­tion of AMF does not men­tion any di­rect ev­i­dence, pos­i­tive or nega­tive, about what hap­pened to malaria rates in the ar­eas where AMF op­er­ated. (There is a dis­cus­sion of the ev­i­dence that the bed nets were in fact de­liv­ered and used.) In the sup­ple­men­tary in­for­ma­tion page, how­ever, we are told:

Pre­vi­ously, AMF ex­pected to col­lect data on malaria case rates from the re­gions in which it funded LLIN dis­tri­bu­tions: […] In 2016, AMF shared malaria case rate data […] but we have not pri­ori­tized an­a­lyz­ing it closely. AMF be­lieves that this data is not high qual­ity enough to re­li­ably in­di­cate ac­tual trends in malaria case rates, so we do not be­lieve that the fact that AMF col­lects malaria case rate data is a con­sid­er­a­tion in AMF’s fa­vor, and do not plan to con­tinue to track AMF’s progress in col­lect­ing malaria case rate data.

The data was noisy, so they sim­ply stopped check­ing whether AMF’s bed net dis­tri­bu­tions do any­thing about malaria.

If we want to know the size of the im­prove­ment made by GiveWell in the de­vel­op­ing world, we have their pre­dic­tions about cost-effec­tive­ness, an au­dit trail ver­ify­ing that work was performed, and their di­rect mea­sure­ment of how much money peo­ple gave be­cause they trusted GiveWell. The pre­dic­tions on the fi­nal tar­get – im­proved out­comes – have not been tested.

GiveWell is actually doing unusually well as far as major funders go. It sticks to describing things it’s actually responsible for. By contrast, the Gates Foundation, in a report to Warren Buffett claiming to describe its impact, simply described overall improvement in the developing world, a very small rhetorical step from claiming credit for 100% of the improvement. GiveWell at least sticks to facts about GiveWell’s own effects, and this is to its credit. But, it focuses on costs it has been able to impose, not benefits it has been able to create.

The Cen­tre for Effec­tive Altru­ism’s William MacAskill made a re­lated point back in 2012, though he talked about the lack of any sort of for­mal out­side val­i­da­tion or au­dit, rather than fo­cus­ing on em­piri­cal val­i­da­tion of out­comes:

As far as I know, GiveWell haven’t com­mis­sioned a thor­ough ex­ter­nal eval­u­a­tion of their recom­men­da­tions. […] This sur­prises me. Whereas busi­nesses have a nat­u­ral feed­back mechanism, namely profit or loss, re­search of­ten doesn’t, hence the need for peer-re­view within academia. This con­cern, when it comes to char­ity-eval­u­a­tion, is even greater. If GiveWell’s anal­y­sis and recom­men­da­tions had ma­jor flaws, or were sys­tem­at­i­cally bi­ased in some way, it would be challeng­ing for out­siders to work this out with­out a thor­ough in­de­pen­dent eval­u­a­tion. For­tu­nately, GiveWell has the re­sources to, for ex­am­ple, em­ploy two top de­vel­op­ment economists to each do an in­de­pen­dent re­view of their recom­men­da­tions and the sup­port­ing re­search. This would make their recom­men­da­tions more ro­bust at a rea­son­able cost.

GiveWell’s page on self-eval­u­a­tion says that it dis­con­tinued ex­ter­nal re­views in Au­gust 2013. This page links to an ex­pla­na­tion of the de­ci­sion, which con­cludes:

We con­tinue to be­lieve that it is im­por­tant to en­sure that our work is sub­jected to in-depth scrutiny. How­ever, at this time, the scrutiny we’re nat­u­rally re­ceiv­ing – com­bined with the high costs and limited ca­pac­ity for for­mal ex­ter­nal eval­u­a­tion – make us in­clined to post­pone ma­jor effort on ex­ter­nal eval­u­a­tion for the time be­ing.

That said,

  • If someone volunteered to do (or facilitate) formal external evaluation, we’d welcome this and would be happy to prominently post or link to criticism.

  • We do in­tend even­tu­ally to re-in­sti­tute for­mal ex­ter­nal eval­u­a­tion.

Four years later, as­sess­ing the cred­i­bil­ity of this as­surance is left as an ex­er­cise for the reader.

Ver­sion 1: GiveWell Labs and ac­tive funding

Then there was GiveWell Labs, later called the Open Philan­thropy Pro­ject. It looked into more po­ten­tial philan­thropic causes, where the ev­i­dence base might not be as cut-and-dried as that for the GiveWell top char­i­ties. One thing they learned was that in many ar­eas, there sim­ply weren’t shovel-ready pro­grams ready for fund­ing – a fun­der has to play a more ac­tive role. This shift was de­scribed by GiveWell co-founder Holden Karnofsky in his 2013 blog post, Challenges of pas­sive fund­ing:

By “pas­sive fund­ing,” I mean a dy­namic in which the fun­der’s role is to re­view oth­ers’ pro­pos­als/​ideas/​ar­gu­ments and pick which to fund, and by “ac­tive fund­ing,” I mean a dy­namic in which the fun­der’s role is to par­ti­ci­pate in – or lead – the de­vel­op­ment of a strat­egy, and find part­ners to “im­ple­ment” it. Ac­tive fun­ders, in other words, are par­ti­ci­pat­ing at some level in “man­age­ment” of part­ner or­ga­ni­za­tions, whereas pas­sive fun­ders are merely choos­ing be­tween plans that other non­prof­its have already come up with.

My in­stinct is gen­er­ally to try the most “pas­sive” ap­proach that’s fea­si­ble. Broadly speak­ing, it seems that a good part­ner or­ga­ni­za­tion will gen­er­ally know their field and en­vi­ron­ment bet­ter than we do and there­fore be best po­si­tioned to de­sign strat­egy; in ad­di­tion, I’d ex­pect a pro­ject to go bet­ter when its im­ple­menter has fully bought into the plan as op­posed to car­ry­ing out what the fun­der wants. How­ever, (a) this philos­o­phy seems to con­trast heav­ily with how most ex­ist­ing ma­jor fun­ders op­er­ate; (b) I’ve seen mul­ti­ple rea­sons to be­lieve the “ac­tive” ap­proach may have more rel­a­tive mer­its than we had origi­nally an­ti­ci­pated. […]

  • In the nonprofit world of today, it seems to us that funder interests are major drivers of which ideas get proposed and fleshed out, and therefore, as a funder, it’s important to express interests rather than trying to be fully “passive.”

  • While we still wish to err on the side of be­ing as “pas­sive” as pos­si­ble, we are rec­og­niz­ing the im­por­tance of clearly ar­tic­u­lat­ing our val­ues/​strat­egy, and also rec­og­niz­ing that an area can be un­der­funded even if we can’t eas­ily find shovel-ready fund­ing op­por­tu­ni­ties in it.

GiveWell earned some cred­i­bil­ity from its novel, ev­i­dence-based out­come-ori­ented ap­proach to char­ity eval­u­a­tion. But this cred­i­bil­ity was already – and still is – a sort of loan. We have GiveWell’s pre­dic­tions or promises of cost effec­tive­ness in terms of out­comes, and we have figures for money moved, from which we can in­fer how much we were promised in im­proved out­comes. As far as I know, no one’s gone back and checked whether those promises turned out to be true.
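
That implicit promise can be made explicit: money moved, divided by the predicted cost per outcome, is the number of outcomes that were in effect promised. A back-of-the-envelope sketch, with both figures as hypothetical placeholders rather than actual GiveWell numbers:

```python
# Back-of-the-envelope: the promise implied by money moved plus an ex-ante
# cost-effectiveness estimate. Both figures are hypothetical placeholders.

money_moved = 100_000_000        # dollars moved on recommendations (assumed)
predicted_cost_per_life = 3_000  # dollars per life saved, ex ante (assumed)

implied_lives_promised = money_moved / predicted_cost_per_life
print(f"Implied promise: roughly {implied_lives_promised:,.0f} lives saved")

# Checking whether the loan was repaid would mean estimating, after the fact,
# how many of these outcomes actually occurred -- the audit step that's missing.
```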

In the mean­time, GiveWell then lev­er­aged this cred­i­bil­ity by ex­tend­ing its meth­ods into more spec­u­la­tive do­mains, where less was check­able, and donors had to put more trust in the sub­jec­tive judg­ment of GiveWell an­a­lysts. This was called GiveWell Labs. At the time, this sort of com­pounded lev­er­age may have been sen­si­ble, but it’s im­por­tant to track whether a debt has been paid off or merely rol­led over.

Ver­sion 2: The Open Philan­thropy Pro­ject and con­trol-seeking

Finally, the Open Philanthropy Project made its largest-ever single grant to purchase its founder a seat on a major organization’s board. This represents a transition from mere active funding to overtly purchasing influence:

The Open Philan­thropy Pro­ject awarded a grant of $30 mil­lion ($10 mil­lion per year for 3 years) in gen­eral sup­port to OpenAI. This grant ini­ti­ates a part­ner­ship be­tween the Open Philan­thropy Pro­ject and OpenAI, in which Holden Karnofsky (Open Philan­thropy’s Ex­ec­u­tive Direc­tor, “Holden” through­out this page) will join OpenAI’s Board of Direc­tors and, jointly with one other Board mem­ber, over­see OpenAI’s safety and gov­er­nance work.

We ex­pect the pri­mary benefits of this grant to stem from our part­ner­ship with OpenAI, rather than sim­ply from con­tribut­ing fund­ing to­ward OpenAI’s work. While we would also ex­pect gen­eral sup­port for OpenAI to be likely benefi­cial on its own, the case for this grant hinges on the benefits we an­ti­ci­pate from our part­ner­ship, par­tic­u­larly the op­por­tu­nity to help play a role in OpenAI’s ap­proach to safety and gov­er­nance is­sues.

Clearly the value propo­si­tion is not in­creas­ing available funds for OpenAI, if OpenAI’s founders’ billion-dol­lar com­mit­ment to it is real:

Sam, Greg, Elon, Reid Hoff­man, Jes­sica Liv­ingston, Peter Thiel, Ama­zon Web Ser­vices (AWS), In­fosys, and YC Re­search are donat­ing to sup­port OpenAI. In to­tal, these fun­ders have com­mit­ted $1 billion, al­though we ex­pect to only spend a tiny frac­tion of this in the next few years.

The Open Philan­thropy Pro­ject is nei­ther us­ing this money to fund pro­grams that have a track record of work­ing, nor to fund a spe­cific pro­gram that it has prior rea­son to ex­pect will do good. Rather, it is buy­ing con­trol, in the hope that Holden will be able to per­suade OpenAI not to de­stroy the world, be­cause he knows bet­ter than OpenAI’s founders.

How does the Open Philan­thropy Pro­ject know that Holden knows bet­ter? Well, it’s done some ac­tive fund­ing of pro­grams it ex­pects to work out. It ex­pects those pro­grams to work out be­cause they were ap­proved by a pro­cess similar to the one used by GiveWell to find char­i­ties that it ex­pects to save lives.

If you want to ac­quire con­trol over some­thing, that im­plies that you think you can man­age it more sen­si­bly than who­ever is in con­trol already. Thus, buy­ing con­trol is a claim to have su­pe­rior judg­ment—not just over oth­ers fund­ing things (the origi­nal GiveWell pitch), but over those be­ing funded.

In a foot­note to the very post an­nounc­ing the grant, the Open Philan­thropy Pro­ject notes that it has his­tor­i­cally tried to avoid ac­quiring lev­er­age over or­ga­ni­za­tions it sup­ports, pre­cisely be­cause it’s not sure it knows bet­ter:

For now, we note that pro­vid­ing a high pro­por­tion of an or­ga­ni­za­tion’s fund­ing may cause it to be de­pen­dent on us and ac­countable pri­mar­ily to us. This may mean that we come to be seen as more re­spon­si­ble for its ac­tions than we want to be; it can also mean we have to choose be­tween pro­vid­ing bad and pos­si­bly dis­tortive guidance/​feed­back (un­bal­anced by other stake­hold­ers’ guidance/​feed­back) and leav­ing the or­ga­ni­za­tion with es­sen­tially no ac­countabil­ity.

This seems to de­scribe two main prob­lems in­tro­duced by be­com­ing a dom­i­nant fun­der:

  1. Peo­ple might ac­cu­rately at­tribute causal re­spon­si­bil­ity for some of the or­ga­ni­za­tion’s con­duct to the Open Philan­thropy Pro­ject.

  2. The Open Philan­thropy Pro­ject might in­fluence the or­ga­ni­za­tion to be­have differ­ently than it oth­er­wise would.

The first seems ob­vi­ously silly. I’ve been try­ing to cor­rect the im­bal­ance where Open Phil is crit­i­cized mainly when it makes grants, by crit­i­ciz­ing it for hold­ing onto too much money.

The sec­ond re­ally is a cost as well as a benefit, and the Open Philan­thropy Pro­ject has been ab­solutely cor­rect to rec­og­nize this. This is the sort of thing GiveWell has con­sis­tently got­ten right since the be­gin­ning and it de­serves credit for mak­ing this prin­ci­ple clear and – un­til now – liv­ing up to it.

But dis­com­fort with be­ing dom­i­nant fun­ders seems in­con­sis­tent with buy­ing a board seat to in­fluence OpenAI. If the Open Philan­thropy Pro­ject thinks that Holden’s judg­ment is good enough that he should be in con­trol, why only here? If he thinks that other Open Philan­thropy Pro­ject AI safety grantees have good judg­ment but OpenAI doesn’t, why not give them similar amounts of money free of strings to spend at their dis­cre­tion and see what hap­pens? Why not buy peo­ple like Eliezer Yud­kowsky, Nick Bostrom, or Stu­art Rus­sell a seat on OpenAI’s board?

On the other hand, the Open Philanthropy Project is right on the merits here with respect to safe superintelligence development. Openness makes sense for weak AI, but if you’re building true strong AI you want to make sure you’re cooperating with all the other teams in a single closed effort. I agree with the Open Philanthropy Project’s assessment of the relevant risks. But it’s not clear to me how often joining the bad guys to prevent their worst excesses is a good strategy, and it seems like it has to often be a mistake. Still, I’m mindful of heroes like John Rabe, Chiune Sugihara, and Oskar Schindler. And if I think someone has a good idea for improving things, it makes sense to reallocate control from people who have worse ideas, even if there’s some potential better allocation.

On the other hand, is Holden Karnofsky the right per­son to do this? The case is mixed.

He listens to and en­gages with the ar­gu­ments from prin­ci­pled ad­vo­cates for AI safety re­search, such as Nick Bostrom, Eliezer Yud­kowsky, and Stu­art Rus­sell. This is a point in his fa­vor. But, I can think of other peo­ple who en­gage with such ar­gu­ments. For in­stance, OpenAI founder Elon Musk has pub­li­cly praised Bostrom’s book Su­per­in­tel­li­gence, and founder Sam Alt­man has writ­ten two blog posts sum­ma­riz­ing con­cerns about AI safety rea­son­ably co­gently. Alt­man even asked Luke Muehlhauser, former ex­ec­u­tive di­rec­tor of MIRI, for feed­back pre-pub­li­ca­tion. He’s met with Nick Bostrom. That sug­gests a sub­stan­tial level of di­rect en­gage­ment with the field, al­though Holden has en­gaged for a longer time, more ex­ten­sively, and more di­rectly.

Another point in Holden’s fa­vor, from my per­spec­tive, is that un­der his lead­er­ship, the Open Philan­thropy Pro­ject has funded the most se­ri­ous-seem­ing pro­grams for both weak and strong AI safety re­search. But Musk also man­aged to (in­di­rectly) fund AI safety re­search at MIRI and by Nick Bostrom per­son­ally, via his $10 mil­lion FLI grant.

The Open Philan­thropy Pro­ject also says that it ex­pects to learn a lot about AI re­search from this, which will help it make bet­ter de­ci­sions on AI risk in the fu­ture and in­fluence the field in the right way. This is rea­son­able as far as it goes. But re­mem­ber that the case for po­si­tion­ing the Open Philan­thropy Pro­ject to do this re­lies on the as­sump­tion that the Open Philan­thropy Pro­ject will im­prove mat­ters by be­com­ing a cen­tral in­fluencer in this field. This move is con­sis­tent with reach­ing that goal, but it is not in­de­pen­dent ev­i­dence that the goal is the right one.

Over­all, there are good nar­row rea­sons to think that this is a po­ten­tial im­prove­ment over the prior situ­a­tion around OpenAI – but only a small and ill-defined im­prove­ment, at con­sid­er­able at­ten­tional cost, and with the offset­ting po­ten­tial harm of in­creas­ing OpenAI’s per­ceived le­gi­t­i­macy as a long-run AI safety or­ga­ni­za­tion.

And it’s wor­ry­ing that Open Philan­thropy Pro­ject’s largest grant – not just for AI risk, but ever (aside from GiveWell Top Char­ity fund­ing) – is be­ing made to an or­ga­ni­za­tion at which Holden’s house­mate and fu­ture brother-in-law is a lead­ing re­searcher. The nepo­tism ar­gu­ment is not my cen­tral ob­jec­tion. If I oth­er­wise thought the grant were ob­vi­ously a good idea, it wouldn’t worry me, be­cause it’s nat­u­ral for peo­ple with shared val­ues and out­looks to be­come close non­pro­fes­sion­ally as well. But in the ab­sence of a clear com­pel­ling spe­cific case for the grant, it’s wor­ry­ing.

Al­to­gether, I’m not say­ing this is an un­rea­son­able shift, con­sid­ered in iso­la­tion. I’m not even sure this is a bad thing for the Open Philan­thropy Pro­ject to be do­ing – in­sid­ers may have in­for­ma­tion that I don’t, and that is difficult to com­mu­ni­cate to out­siders. But as out­siders, there comes a point when some­one’s maxed out their moral credit, and we should wait for re­sults be­fore ac­tively try­ing to en­trust the Open Philan­thropy Pro­ject and its staff with more re­spon­si­bil­ity.

EA Funds and self-recommendation

The Cen­tre for Effec­tive Altru­ism is ac­tively try­ing to en­trust the Open Philan­thropy Pro­ject and its staff with more re­spon­si­bil­ity.

The con­cerns of CEA’s CEO William MacAskill about GiveWell have, as far as I can tell, never been ad­dressed, and the un­der­ly­ing is­sues have only be­come more acute. But CEA is now work­ing to put more money un­der the con­trol of Open Philan­thropy Pro­ject staff, through its new EA Funds product – a way for sup­port­ers to del­e­gate giv­ing de­ci­sions to ex­pert EA “fund man­agers” by giv­ing to one of four funds: Global Health and Devel­op­ment, An­i­mal Welfare, Long-Term Fu­ture, and Effec­tive Altru­ism Com­mu­nity.

The Effec­tive Altru­ism move­ment be­gan by say­ing that be­cause very poor peo­ple ex­ist, we should re­al­lo­cate money from or­di­nary peo­ple in the de­vel­oped world to the global poor. Now the pitch is in effect that be­cause very poor peo­ple ex­ist, we should re­al­lo­cate money from or­di­nary peo­ple in the de­vel­oped world to the ex­tremely wealthy. This is a strange and sur­pris­ing place to end up, and it’s worth re­trac­ing our steps. Again, I find it eas­iest to think of three stages:

  1. Money can go much farther in the de­vel­op­ing world. Here, we’ve found some ex­am­ples for you. As a re­sult, you can do a huge amount of good by giv­ing away a large share of your in­come, so you ought to.

  2. We’ve found ways for you to do a huge amount of good by giv­ing away a large share of your in­come for de­vel­op­ing-world in­ter­ven­tions, so you ought to trust our recom­men­da­tions. You ought to give a large share of your in­come to these weird things our friends are do­ing that are even bet­ter, or join our friends.

  3. We’ve found ways for you to do a huge amount of good by fund­ing weird things our friends are do­ing, so you ought to trust the peo­ple we trust. You ought to give a large share of your in­come to a multi-billion-dol­lar foun­da­tion that funds such things.

Stage 1: The di­rect pitch

At first, Giv­ing What We Can (the or­ga­ni­za­tion that even­tu­ally be­came CEA) had a sim­ple, easy to un­der­stand pitch:

Giv­ing What We Can is the brain­child of Toby Ord, a philoso­pher at Bal­liol Col­lege, Oxford. In­spired by the ideas of ethi­cists Peter Singer and Thomas Pogge, Toby de­cided in 2009 to com­mit a large pro­por­tion of his in­come to char­i­ties that effec­tively alle­vi­ate poverty in the de­vel­op­ing world.

[…]

Dis­cov­er­ing that many of his friends and col­leagues were in­ter­ested in mak­ing a similar pledge, Toby worked with fel­low Oxford philoso­pher Will MacAskill to cre­ate an in­ter­na­tional or­ga­ni­za­tion of peo­ple who would donate a sig­nifi­cant pro­por­tion of their in­come to cost-effec­tive char­i­ties.

Giv­ing What We Can launched in Novem­ber 2009, at­tract­ing sig­nifi­cant me­dia at­ten­tion. Within a year, 64 peo­ple had joined the so­ciety, their pledged dona­tions amount­ing to $21 mil­lion. Ini­tially run on a vol­un­teer ba­sis, Giv­ing What We Can took on full-time staff in the sum­mer of 2012.

In effect, its ar­gu­ment was: “Look, you can do huge amounts of good by giv­ing to peo­ple in the de­vel­op­ing world. Here are some ex­am­ples of char­i­ties that do that. It seems like a great idea to give 10% of our in­come to those char­i­ties.”

GWWC was a sim­ple product, with a clear, limited scope. Its founders be­lieved that peo­ple, in­clud­ing them, ought to do a thing – so they ar­gued di­rectly for that thing, us­ing the ar­gu­ments that had per­suaded them. If it wasn’t for you, it was easy to figure that out; but a sur­pris­ingly large num­ber of peo­ple were per­suaded by a sim­ple, di­rect state­ment of the ar­gu­ment, took the pledge, and gave a lot of money to char­i­ties helping the world’s poor­est.

Stage 2: Rhetoric and be­lief diverge

Then, GWWC staff were per­suaded you could do even more good with your money in ar­eas other than de­vel­op­ing-world char­ity, such as ex­is­ten­tial risk miti­ga­tion. En­courag­ing dona­tions and work in these ar­eas be­came part of the broader Effec­tive Altru­ism move­ment, and GWWC’s um­brella or­ga­ni­za­tion was named the Cen­tre for Effec­tive Altru­ism. So far, so good.

But this left Effec­tive Altru­ism in an awk­ward po­si­tion; while lead­er­ship of­ten per­son­ally be­lieve the most effec­tive way to do good is far-fu­ture stuff or similarly weird-sound­ing things, many peo­ple who can see the mer­its of the de­vel­op­ing-world char­ity ar­gu­ment re­ject the ar­gu­ment that be­cause the vast ma­jor­ity of peo­ple live in the far fu­ture, even a very small im­prove­ment in hu­man­ity’s long-run prospects out­weighs huge im­prove­ments on the global poverty front. They also of­ten re­ject similar scope-sen­si­tive ar­gu­ments for things like an­i­mal char­i­ties.

Giv­ing What We Can’s page on what we can achieve still fo­cuses on global poverty, be­cause de­vel­op­ing-world char­ity is eas­ier to ex­plain per­sua­sively. How­ever, EA lead­er­ship tends to pri­vately fo­cus on things like AI risk. Two years ago many at­ten­dees at the EA Global con­fer­ence in the San Fran­cisco Bay Area were sur­prised that the con­fer­ence fo­cused so heav­ily on AI risk, rather than the global poverty in­ter­ven­tions they’d ex­pected.

Stage 3: Effec­tive al­tru­ism is self-recommending

Shortly before the launch of the EA Funds, I was told in informal conversations that they were a response to demand: Giving What We Can pledge-takers and other EA donors had told CEA that they trusted it to direct their giving. CEA was responding by creating a product for the people who wanted it.

This seemed pretty rea­son­able to me, and on the whole good. If some­one wants to trust you with their money, and you think you can do some­thing good with it, you might as well take it, be­cause they’re es­ti­mat­ing your skill above theirs. But not ev­ery­one agrees, and as the Mad­off case demon­strates, “peo­ple are beg­ging me to take their money” is not a defini­tive ar­gu­ment that you are do­ing any­thing real.

In prac­tice, the funds are man­aged by Open Philan­thropy Pro­ject staff:

We want to keep this idea as sim­ple as pos­si­ble to be­gin with, so we’ll have just four funds, with the fol­low­ing man­agers:

  • Global Health and Development – Elie Hassenfeld

  • An­i­mal Welfare – Lewis Bollard

  • Long-run fu­ture – Nick Beckstead

  • Move­ment-build­ing – Nick Beckstead

(Note that the meta-char­ity fund will be able to fund CEA; and note that Nick Beck­stead is a Trus­tee of CEA. The long-run fu­ture fund and the meta-char­ity fund con­tinue the work that Nick has been do­ing run­ning the EA Giv­ing Fund.)

It’s not a co­in­ci­dence that all the fund man­agers work for GiveWell or Open Philan­thropy. First, these are the or­gani­sa­tions whose char­ity eval­u­a­tion we re­spect the most. The worst-case sce­nario, where your dona­tion just adds to the Open Philan­thropy fund­ing within a par­tic­u­lar area, is there­fore still a great out­come. Se­cond, they have the best in­for­ma­tion available about what grants Open Philan­thropy are plan­ning to make, so have a good un­der­stand­ing of where the re­main­ing fund­ing gaps are, in case they feel they can use the money in the EA Fund to fill a gap that they feel is im­por­tant, but isn’t cur­rently ad­dressed by Open Philan­thropy.

In past years, Giv­ing What We Can recom­men­da­tions have largely over­lapped with GiveWell’s top char­i­ties.

In the com­ments on the launch an­nounce­ment on the EA Fo­rum, sev­eral peo­ple (in­clud­ing me) pointed out that the Open Philan­thropy Pro­ject seems to be hav­ing trou­ble giv­ing away even the money it already has, so it seems odd to di­rect more money to Open Philan­thropy Pro­ject de­ci­sion­mak­ers. CEA’s se­nior mar­ket­ing man­ager replied that the Funds were a min­i­mum vi­able product to test the con­cept:

I don’t think the long-term goal is that OpenPhil pro­gram officers are the only fund man­agers. Work­ing with them was the best way to get an MVP ver­sion in place.

This also seemed okay to me, and I said so at the time.

[NOTE: I’ve ed­ited the next para­graph to ex­cise some un­re­li­able in­for­ma­tion. Sorry for the er­ror, and thanks to Rob Wiblin for point­ing it out.]

After they were launched, though, I saw phras­ings that were not so cau­tious at all, in­stead mak­ing claims that this was gen­er­ally a bet­ter way to give. As of writ­ing this, if some­one on the effec­tivealtru­ism.org web­site clicks on “Donate Effec­tively” they will be led di­rectly to a page pro­mot­ing EA Funds. When I looked at Giv­ing What We Can’s top char­i­ties page in early April, it recom­mended the EA Funds “as the high­est im­pact op­tion for donors.”

This is not a re­sponse to de­mand, it is an at­tempt to cre­ate de­mand by us­ing CEA’s au­thor­ity, tel­ling peo­ple that the funds are bet­ter than what they’re do­ing already. By con­trast, GiveWell’s Top Char­i­ties page sim­ply says:

Our top char­i­ties are ev­i­dence-backed, thor­oughly vet­ted, un­der­funded or­ga­ni­za­tions.

This care­fully avoids any overt claim that they’re the high­est-im­pact op­tion available to donors. GiveWell avoids say­ing that be­cause there’s no way they could know it, so say­ing it wouldn’t be truth­ful.

A marketing email might have just been dashed off quickly, and an exaggerated wording might just have been an oversight. But Giving What We Can’s top charities page is not a dashed-off email, and it recommended the EA Funds “as the highest impact option for donors.”

The word­ing has since been qual­ified with “for most donors”, which is a good change. But the thing I’m wor­ried about isn’t just the ex­plicit ex­ag­ger­ated claims – it’s the un­der­ly­ing mar­ket­ing mind­set that made them seem like a good idea in the first place. EA seems to have switched from an en­dorse­ment of the best things out­side it­self, to an en­dorse­ment of it­self. And it’s con­cen­trat­ing de­ci­sion­mak­ing power in the Open Philan­thropy Pro­ject.

Effec­tive al­tru­ism is overex­tended, but it doesn’t have to be

There is a saying in finance that was old even back when Keynes said it: if you owe the bank a million dollars, then you have a problem. If you owe the bank a billion dollars, then the bank has a problem.

In other words, if some­one ex­tends you a level of trust they could sur­vive writ­ing off, then they might call in that loan. As a re­sult, they have lev­er­age over you. But if they overex­tend, putting all their eggs in one bas­ket, and you are that bas­ket, then you have lev­er­age over them; you’re too big to fail. Let­ting you fail would be so dis­as­trous for their in­ter­ests that you can ex­tract nearly ar­bi­trary con­ces­sions from them, in­clud­ing fur­ther in­vest­ment. For this rea­son, suc­cess­ful in­sti­tu­tions of­ten try to di­ver­sify their in­vest­ments, and avoid overex­tend­ing them­selves. Reg­u­la­tors, for the same rea­son, try to pre­vent banks from be­com­ing “too big to fail.”

The Effective Altruism movement is concentrating decisionmaking power and trust as much as possible, in a way that sets it up to require ever-increasing investments of confidence to keep the game going.

The al­ter­na­tive is to keep the scope of each or­ga­ni­za­tion nar­row, overtly ask for trust for each ven­ture sep­a­rately, and make it clear what sorts of pro­grams are be­ing funded. For in­stance, Giv­ing What We Can should go back to its ini­tial fo­cus of global poverty re­lief.

Like many EA lead­ers, I hap­pen to be­lieve that any­thing you can do to steer the far fu­ture in a bet­ter di­rec­tion is much, much more con­se­quen­tial for the well-be­ing of sen­tient crea­tures than any purely short-run im­prove­ment you can cre­ate now. So it might seem odd that I think Giv­ing What We Can should stay fo­cused on global poverty. But, I be­lieve that the sin­gle most im­por­tant thing we can do to im­prove the far fu­ture is hold onto our abil­ity to ac­cu­rately build shared mod­els. If we use bait-and-switch tac­tics, we are ac­tively erod­ing the most im­por­tant type of cap­i­tal we have – co­or­di­na­tion ca­pac­ity.

If you do not think giv­ing 10% of one’s in­come to global poverty char­i­ties is the right thing to do, then you can’t in full in­tegrity urge oth­ers to do it – so you should stop. You might still be­lieve that GWWC ought to ex­ist. You might still be­lieve that it is a pos­i­tive good to en­courage peo­ple to give much of their in­come to help the global poor, if they wouldn’t have been do­ing any­thing else es­pe­cially effec­tive with the money. If so, and you hap­pen to find your­self in charge of an or­ga­ni­za­tion like Giv­ing What We Can, the thing to do is write a let­ter to GWWC mem­bers tel­ling them that you’ve changed your mind, and why, and offer­ing to give away the brand to who­ever seems best able to hon­estly main­tain it.

If some­one at the Cen­tre for Effec­tive Altru­ism fully be­lieves in GWWC’s origi­nal mis­sion, then that might make the tran­si­tion eas­ier. If not, then one still has to tell the truth and do what’s right.

And what of the EA Funds? The Long-Term Fu­ture Fund is run by Open Philan­thropy Pro­ject Pro­gram Officer Nick Beck­stead. If you think that it’s a good thing to del­e­gate giv­ing de­ci­sions to Nick, then I would agree with you. Nick’s a great guy! I’m always happy to see him when he shows up at house par­ties. He’s smart, and he ac­tively seeks out ar­gu­ments against his cur­rent point of view. But the right thing to do, if you want to per­suade peo­ple to del­e­gate their giv­ing de­ci­sions to Nick Beck­stead, is to make a prin­ci­pled case for del­e­gat­ing giv­ing de­ci­sions to Nick Beck­stead. If the Cen­tre for Effec­tive Altru­ism did that, then Nick would al­most cer­tainly feel more free to al­lo­cate funds to the best things he knows about, not just the best things he sus­pects EA Funds donors would be able to un­der­stand and agree with.

If you can’t di­rectly per­suade peo­ple, then maybe you’re wrong. If the prob­lem is in­fer­en­tial dis­tance, then you’ve got some work to do bridg­ing that gap.

There’s noth­ing wrong with set­ting up a fund to make it easy. It’s ac­tu­ally a re­ally good idea. But there is some­thing wrong with the mul­ti­ple lay­ers of vague in­di­rec­tion in­volved in the cur­rent mar­ket­ing of the Far Fu­ture fund – us­ing global poverty to sell the generic idea of do­ing the most good, then us­ing CEA’s iden­tity as the or­ga­ni­za­tion in charge of do­ing the most good to per­suade peo­ple to del­e­gate their giv­ing de­ci­sions to it, and then send­ing their money to some dude at the multi-billion-dol­lar foun­da­tion to give away at his per­sonal dis­cre­tion. The same ar­gu­ment ap­plies to all four Funds.

Like­wise, if you think that work­ing di­rectly on AI risk is the most im­por­tant thing, then you should make ar­gu­ments di­rectly for work­ing on AI risk. If you can’t di­rectly per­suade peo­ple, then maybe you’re wrong. If the prob­lem is in­fer­en­tial dis­tance, it might make sense to imi­tate the ex­am­ple of some­one like Eliezer Yud­kowsky, who used in­di­rect meth­ods to bridge the in­fer­en­tial gap by writ­ing ex­ten­sively on in­di­vi­d­ual hu­man ra­tio­nal­ity, and did not try to con­trol oth­ers’ ac­tions in the mean­time.

If Holden thinks he should be in charge of some AI safety re­search, then he should ask Good Ven­tures for funds to ac­tu­ally start an AI safety re­search or­ga­ni­za­tion. I’d be ex­cited to see what he’d come up with if he had full con­trol of and re­spon­si­bil­ity for such an or­ga­ni­za­tion. But I don’t think any­one has a good plan to work di­rectly on AI risk, and I don’t have one ei­ther, which is why I’m not di­rectly work­ing on it or fund­ing it. My plan for im­prov­ing the far fu­ture is to build hu­man co­or­di­na­tion ca­pac­ity.

(If, by con­trast, Holden just thinks there needs to be co­or­di­na­tion be­tween differ­ent AI safety or­ga­ni­za­tions, the ob­vi­ous thing to do would be to work with FLI on that, e.g. by giv­ing them enough money to throw their weight around as a fun­der. They or­ga­nized the suc­cess­ful Puerto Rico con­fer­ence, af­ter all.)

Another thing that would be en­courag­ing would be if at least one of the Funds were not ad­ministered en­tirely by an Open Philan­thropy Pro­ject staffer, and ideally an ex­pert who doesn’t benefit from the halo of “be­ing an EA.” For in­stance, Chris Blattman is a de­vel­op­ment economist with ex­pe­rience de­sign­ing pro­grams that don’t just use but gen­er­ate ev­i­dence on what works. When peo­ple were ar­gu­ing about whether sweat­shops are good or bad for the global poor, he ac­tu­ally went and looked by perform­ing a ran­dom­ized con­trol­led trial. He’s lead­ing two new ini­ti­a­tives with J-PAL and IPA, and ex­pects that di­rec­tors de­sign­ing stud­ies will also have to spend time fundrais­ing. Hav­ing fund­ing lined up seems like the sort of thing that would let them spend more time ac­tu­ally run­ning pro­grams. And more gen­er­ally, he seems likely to know about fund­ing op­por­tu­ni­ties the Open Philan­thropy Pro­ject doesn’t, sim­ply be­cause he’s em­bed­ded in a slightly differ­ent part of the global health and de­vel­op­ment net­work.

Nar­rower pro­jects that rely less on the EA brand and more on what they’re ac­tu­ally do­ing, and more co­op­er­a­tion on equal terms with out­siders who seem to be do­ing some­thing good already, would do a lot to help EA grow be­yond putting stick­ers on its own be­hav­ior chart. I’d like to see EA grow up. I’d be ex­cited to see what it might do.

Summary

  1. Good pro­grams don’t need to dis­tort the story peo­ple tell about them, while bad pro­grams do.

  2. Mo­ral con­fi­dence games – treat­ing past promises and trust as a track record to jus­tify more trust – are an ex­am­ple of the kind of dis­tor­tion men­tioned in (1), that benefits bad pro­grams more than good ones.

  3. The Open Philanthropy Project’s OpenAI grant represents a shift from evaluating other programs’ effectiveness, to assuming its own effectiveness.

  4. EA Funds rep­re­sents a shift from EA eval­u­at­ing pro­grams’ effec­tive­ness, to as­sum­ing EA’s effec­tive­ness.

  5. A shift from eval­u­at­ing other pro­grams’ effec­tive­ness, to as­sum­ing one’s own effec­tive­ness, is an ex­am­ple of the kind of “moral con­fi­dence game” men­tioned in (2).

  6. EA ought to fo­cus on scope-limited pro­jects, so that it can di­rectly make the case for those par­tic­u­lar pro­jects in­stead of rely­ing on EA iden­tity as a rea­son to sup­port an EA or­ga­ni­za­tion.

  7. EA or­ga­ni­za­tions ought to en­trust more re­spon­si­bil­ity to out­siders who seem to be do­ing good things but don’t overtly iden­tify as EA, in­stead of try­ing to keep it all in the fam­ily.

(Cross-posted at my per­sonal blog and the EA Fo­rum.
Dis­clo­sure: I know many peo­ple in­volved at many of the or­ga­ni­za­tions dis­cussed, and I used to work for GiveWell. I have no cur­rent in­sti­tu­tional af­fili­a­tion to any of them. Every­one men­tioned has always been nice to me and I have no per­sonal com­plaints.)