MIRI’s 2018 Fundraiser

(Cross-posted from the MIRI blog)

Update Nov. 28: We’ve updated this post to announce that Haskell developer Edward Kmett has joined the MIRI team! Additionally, we’ve added links to some of our recent Agent Foundations research.

Our donors showed up in a big way for Giving Tuesday! Through Dec. 29, we’re also included in a matching opportunity by professional poker players Dan Smith, Aaron Merchak, Matt Ashton, and Stephen Chidwick, in partnership with Raising for Effective Giving. As part of their Double-Up Drive, they’ll be matching up to $200,000 in donations to MIRI and REG. Donations can be made either on doubleupdrive.com, or by donating directly on MIRI’s website and sending your donation receipt to receipts@doubleupdrive.com. We recommend the latter, particularly for US tax residents (MIRI is a 501(c)(3) organization).

MIRI is running its 2018 fundraiser through December 31! Our progress:

Donate now

MIRI is a math/CS research nonprofit with a mission of maximizing the potential humanitarian benefit of smarter-than-human artificial intelligence. You can learn more about the kind of work we do in “Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome” and “Embedded Agency.”

Our funding targets this year are based on a goal of raising enough in 2018 to match our “business-as-usual” budget next year. We view “make enough each year to pay for the next year” as a good heuristic for MIRI, given that we’re a quickly growing nonprofit with a healthy level of reserves and a budget dominated by researcher salaries.

We focus on business-as-usual spending in order to factor out the (likely very large) cost of moving to new office space in the next couple of years as we continue to grow, which would otherwise introduce a large amount of variance into the model.[1]

My current model for our (business-as-usual, outlier-free) 2019 spending ranges from $4.4M to $5.5M, with a point estimate of $4.8M, up from $3.5M this year and $2.1M in 2017. The model takes as input estimated ranges for all our major spending categories, but the overwhelming majority of the variance comes from the number of staff we’ll add to the team.[2]
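As an illustration of how a range-based estimate like this can be computed, here is a minimal Monte Carlo sketch. All of the category names, dollar ranges, and headcount figures below are hypothetical placeholders, not MIRI’s actual model or numbers:

```python
import random

# Hypothetical input ranges (in $M) for each major spending category.
# These figures are illustrative placeholders, not MIRI's actual numbers.
CATEGORY_RANGES = {
    "ops_and_admin": (0.6, 0.8),
    "events_and_programs": (0.4, 0.7),
    "office_and_misc": (0.3, 0.5),
}
SALARY_RANGE = (0.15, 0.25)   # fully loaded cost per researcher, in $M
CURRENT_RESEARCHERS = 15
NEW_HIRES_RANGE = (0, 8)      # the dominant source of variance


def sample_budget(rng: random.Random) -> float:
    """Draw one plausible total-budget outcome from the input ranges."""
    total = sum(rng.uniform(lo, hi) for lo, hi in CATEGORY_RANGES.values())
    headcount = CURRENT_RESEARCHERS + rng.randint(*NEW_HIRES_RANGE)
    return total + headcount * rng.uniform(*SALARY_RANGE)


rng = random.Random(0)
samples = sorted(sample_budget(rng) for _ in range(10_000))
low, mid, high = (samples[int(len(samples) * q)] for q in (0.05, 0.5, 0.95))
print(f"5th pct ${low:.1f}M, median ${mid:.1f}M, 95th pct ${high:.1f}M")
```

A model of this shape makes the sensitivity claim in the footnotes easy to check: rerunning with the hiring range pinned to a single value collapses most of the spread between the 5th and 95th percentiles.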

In the mainline scenario, our 2019 budget breakdown looks roughly like this:

In this scenario, we currently have ~1.5 years of reserves on hand. Since we’ve raised ~$4.3M between Jan. 1 and Nov. 25,[3] our two targets are:

  • Target 1 ($500k), representing the difference between what we’ve raised so far this year and our point estimate for business-as-usual spending next year.

  • Target 2 ($1.2M), what’s needed for our funding streams to keep pace with our growth toward the upper end of our projections.[4]

Below, we’ll summarize what’s new at MIRI and talk more about our room for more funding.

1. Recent updates

We’ve released a string of new posts on our recent activities and strategy:

  • 2018 Update: Our New Research Directions discusses the new set of research directions we’ve been ramping up over the last two years, how they relate to our Agent Foundations research and our goal of “deconfusion,” and why we’ve adopted a “nondisclosed-by-default” policy for this research.

  • Embedded Agency describes our Agent Foundations research agenda as different angles of attack on a single central difficulty: we don’t know how to characterize good reasoning and decision-making for agents embedded in their environment.

  • Summer MIRI Updates discusses new hires, new donations and grants, and new programs we’ve been running to recruit research staff and grow the total pool of AI alignment researchers.

And, added Nov. 28:

  • MIRI’s Newest Recruit: Edward Kmett announces our latest hire: noted Haskell developer Edward Kmett, who popularized the use of lenses for functional programming and maintains a large number of the libraries surrounding the Haskell core libraries.

  • 2017 in Review recaps our activities and donors’ support from last year.

Our 2018 Update also discusses the much wider pool of engineers and computer scientists we’re now trying to recruit, and the much larger total number of people we’re trying to add to the team in the near future:

We’re seeking anyone who can cause our “become less confused about AI alignment” work to go faster.
In practice, this means: people who natively think in math or code, who take seriously the problem of becoming less confused about AI alignment (quickly!), and who are generally capable. In particular, we’re looking for high-end Google programmer levels of capability; you don’t need a 1-in-a-million test score or a halo of destiny. You also don’t need a PhD, explicit ML background, or even prior research experience.

If the above might be you, and the idea of working at MIRI appeals to you, I suggest sending in a job application or shooting your questions at Buck Shlegeris, a researcher at MIRI who’s been helping with our recruiting.

We’re also hiring for Agent Foundations roles, though at a much slower pace. For those roles, we recommend interacting with us and other people hammering on AI alignment problems on Less Wrong and the AI Alignment Forum, or at local MIRIx groups. We then generally hire Agent Foundations researchers from people we’ve gotten to know through the above channels, visits, and events like the AI Summer Fellows program.

A great place to start developing intuitions for these problems is Scott Garrabrant’s recently released fixed point exercises, or various resources on the MIRI Research Guide. Some examples of recent public work on Agent Foundations / embedded agency include:

These results are relatively small compared to Nate’s forthcoming tiling agents paper or Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant’s forthcoming “The Inner Alignment Problem.” However, they should give a good sense of some of the recent directions we’ve been pushing in with our Agent Foundations work.

2. Room for more funding

As noted above, the biggest sources of uncertainty in our 2019 budget estimates are how many research staff we hire and how much we spend on moving to new offices.

In our 2017 fundraiser, we set a goal of hiring 10 new research staff in 2018–2019. So far, we’re up two research staff, with enough promising candidates in the pipeline that I still consider 10 a doable (though ambitious) goal.

Following the amazing show of support we received from donors last year (and continuing into 2018), we had significantly more funds than we anticipated, and we found more ways to usefully spend them than we expected. In particular, we’ve been able to translate the “bonus” support we received in 2017 into broadening the scope of our recruiting efforts. As a consequence, our 2018 spending, which will come in at around $3.5M, actually matches the point estimate I gave in 2017 for our 2019 budget, rather than my prediction for 2018. That’s a large step up from what I predicted, and an even larger step up from last year’s budget of $2.1M.[5]

Our two fundraiser goals, Target 1 ($500k) and Target 2 ($1.2M), correspond to the budgetary needs we can easily predict and account for. Our 2018 went much better as a result of donors’ greatly increased support in 2017, and it’s possible that we’re in a similar situation today, though I’m not confident that this is the case.

Concretely, some ways that our decision-making changed as a result of the amazing support we saw were:

  • We spent more on running all-expenses-paid AI Risk for Computer Scientists workshops. We ran the first such workshop in February, and saw a lot of value in it as a venue for people with relevant technical experience to start reasoning more about AI risk. Since then, we’ve run another seven events in 2018, with more planned for 2019. As hoped, these workshops have also generated interest in joining MIRI and other AI safety research teams. This has resulted in one full-time MIRI research staff hire, and on the order of ten candidates with good prospects of joining full-time in 2019, including two recent interns.

  • We’ve been more consistently willing and able to pay competitive salaries for top technical talent. A special case of this is hiring relatively senior research staff like Edward Kmett.

  • We raised salaries for some existing staff members. We have a very committed staff, and some staff at MIRI had previously asked for fairly low salaries in order to help keep MIRI’s organizational costs down. Keeping salaries that low no longer makes much sense at our current organizational size, both in terms of our team’s productivity and in terms of their well-being.

  • We ran a summer research internship program, on a larger scale than we otherwise would have.

  • As we’ve considered options for new office space that can accommodate our expansion, we’ve been able to filter less on price relative to fit. We’ve also been able to spend more on renovations that we expect to produce a working environment where our researchers can do their work with fewer distractions or inconveniences.

2018 brought a positive update about our ability to cost-effectively convert surprise funding increases into (what look from our perspective like) very high-value actions, and the above list hopefully helps clarify what that can look like in practice. We can’t promise to repeat that in 2019 if this fundraiser overshoots its targets, but we have reason for optimism.

Donate now

  1. That is, our business-as-usual model tries to remove one-time outlier costs so that it’s easier to see what “the new normal” is in MIRI’s spending and think about our long-term growth curve.

  2. This estimation is fairly rough and uncertain. One weakness of this model is that it treats the inputs as though they were independent, which is not always the case. I also didn’t try to account for the fact that we’re likely to spend more in worlds where we see more fundraising success.
    However, a sensitivity analysis on the final output showed that the overwhelming majority of the uncertainty in this estimate comes from how many research staff we hire, which matches my expectations and suggests that the model is doing a decent job of tracking the intuitively important variables. I also ended up with similar targets when I ran the numbers on our funding status in other ways and when I considered different funding scenarios.

  3. This excludes earmarked funding for AI Impacts, an independent research group that’s institutionally housed at MIRI.

  4. We could also think in terms of a “Target 0.5” of $100k in order to hit the bottom of the range, $4.4M. However, I worried that a $100k target would be misleading given that we’re thinking in terms of a $4.4–5.5M budget.

  5. Quoting our 2017 fundraiser post: “If we succeed, our point estimate is that our 2018 budget will be $2.8M and our 2019 budget will be $3.5M, up from roughly $1.9M in 2017.” The $1.9M figure was an estimate from before 2017 had ended. We’ve now revised this figure to $2.1M, which happens to bring it in line with our 2016 point estimate for how much we’d spend in 2017.
