MIRI’s 2016 Fundraiser

Update December 22: Our donors came together during the fundraiser to get us most of the way to our $750,000 goal. In all, 251 donors contributed $589,248, making this our second-biggest fundraiser to date. Although we fell short of our target by about $160,000, we have since made up the shortfall thanks to November/December donors. I’m extremely grateful for this support, and will plan accordingly for more staff growth over the coming year.

As described in our post-fundraiser update, we are still fairly funding-constrained. Donations at this time will have an especially large effect on our 2017–2018 hiring plans and strategy, as we try to assess our future prospects. For some external endorsements of MIRI as a good place to give this winter, see recent evaluations by Daniel Dewey, Nick Beckstead, Owen Cotton-Barratt, and Ben Hoskin.

Our 2016 fundraiser is underway! Unlike in past years, we’ll only be running one fundraiser in 2016, from Sep. 16 to Oct. 31. Our progress so far (updated live):

Donate Now

Employer matching and pledges to give later this year also count towards the total. Click here to learn more.

MIRI is a nonprofit research group based in Berkeley, California. We do foundational research in mathematics and computer science aimed at ensuring that smarter-than-human AI systems have a positive impact on the world. 2016 has been a big year for MIRI, and for the wider field of AI alignment research. Our 2016 strategic update in early August reviewed a number of recent developments.

We also published new results in decision theory and logical uncertainty, including “Parametric bounded Löb’s theorem and robust cooperation of bounded agents” and “A formal solution to the grain of truth problem.” For a survey of our research progress and other updates from last year, see our 2015 review. In the last three weeks, there have been three more major developments:

  • We released a new paper, “Logical induction,” describing a method for learning to assign reasonable probabilities to mathematical conjectures and computational facts in a way that outpaces deduction.

  • The Open Philanthropy Project awarded MIRI a one-year $500,000 grant to scale up our research program, with a strong chance of renewal next year.

  • The Open Philanthropy Project is supporting the launch of the new UC Berkeley Center for Human-Compatible AI, headed by Stuart Russell.

Things have been moving fast over the last nine months. If we can replicate last year’s fundraising successes, we’ll be in an excellent position to move forward on our plans to grow our team and scale our research activities.

The strategic landscape

Humans are far better than other species at altering our environment to suit our preferences. This is primarily due not to our strength or speed, but to our intelligence, broadly construed: our ability to reason, plan, accumulate scientific knowledge, and invent new technologies. AI is a technology that appears likely to have a uniquely large impact on the world because it has the potential to automate these abilities, and to eventually surpass humans decisively on the relevant cognitive metrics.

Separate from the task of building intelligent computer systems is the task of ensuring that these systems are aligned with our values. Aligning an AI system requires surmounting a number of serious technical challenges, most of which have received relatively little scholarly attention to date. MIRI’s role as a nonprofit in this space, from our perspective, is to help solve parts of the problem that are a poor fit for mainstream industry and academic groups.

Our long-term plans are contingent on future developments in the field of AI. Because these developments are highly uncertain, we currently focus mostly on work that we expect to be useful in a wide variety of possible scenarios. The more optimistic scenarios we consider often look something like this:

  • In the short term, a research community coalesces, develops a good in-principle understanding of what the relevant problems are, and produces formal tools for tackling these problems. AI researchers move toward a minimal consensus about best practices, normalizing discussions of AI’s long-term social impact, a risk-conscious security mindset, and work on error tolerance and value specification.

  • In the medium term, researchers build on these foundations and develop a more mature understanding. As we move toward a clearer sense of what smarter-than-human AI systems are likely to look like (something closer to a credible roadmap), we imagine the research community moving toward increased coordination and cooperation in order to discourage race dynamics.

  • In the long term, we would like to see AI-empowered projects (as described by Dewey [2015]) used to avert major AI mishaps. For this purpose, we’d want to solve a weak version of the alignment problem for limited AI systems: systems just capable enough to serve as useful levers for preventing AI accidents and misuse.

  • In the very long term, we can hope to solve the “full” alignment problem for highly capable, highly autonomous AI systems. Ideally, we want to reach a position where we can afford to wait until we reach scientific and institutional maturity, taking our time to dot every i and cross every t before we risk “locking in” design choices.

The above is a vague sketch, and we prioritize research we think would be useful in less optimistic scenarios as well. Additionally, “short term” and “long term” here are relative, and different timeline forecasts can have very different policy implications. Still, the sketch may help clarify the directions we’d like to see the research community move in. For more on our research focus and methodology, see our research page and MIRI’s Approach.

Our organizational plans

We currently employ seven technical research staff (six research fellows and one assistant research fellow), plus two researchers signed on to join in the coming months and an additional six research associates and research interns.1 Our budget this year is about $1.75M, up from $1.65M in 2015 and $950k in 2014.2 Our eventual goal (subject to revision) is to grow until we have between 13 and 17 technical research staff, at which point our budget would likely be in the $3–4M range. If we reach that point successfully while maintaining a two-year runway, we’re likely to shift out of growth mode.

Our budget estimate for 2017 is roughly $2–2.2M, which means that we’re entering this fundraiser with about 14 months’ runway. We’re uncertain about how many donations we’ll receive between November and next September,3 but projecting from current trends, we expect about four fifths of our total donations to come from the fundraiser and one fifth to come in off-fundraiser.4

Based on this, we have the following fundraiser goals:
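To make the runway arithmetic above concrete, here is a minimal sketch. The reserve figure is an assumption chosen to match the stated ~14 months of runway against a ~$2.1M budget (the midpoint of the $2–2.2M estimate); it is not a published number.

```python
def runway_months(reserves: float, annual_budget: float) -> float:
    """Months of operation that current reserves cover at a given annual budget."""
    return reserves / (annual_budget / 12)

# Assumed reserves of ~$2.45M (hypothetical, back-calculated from the
# stated ~14 months of runway) against a ~$2.1M projected 2017 budget:
months = runway_months(reserves=2_450_000, annual_budget=2_100_000)
print(round(months))  # prints 14
```

The same function also makes the target tiers easy to interpret: at a ~$2.1M budget, each additional $1M of donations buys roughly half a year of runway.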

Basic target - $750,000. We feel good about our ability to execute our growth plans at this funding level. We’ll be able to move forward comfortably, albeit with somewhat more caution than at the higher targets.

Growth target - $1,000,000. This would amount to about half a year’s runway. At this level, we can afford to make more uncertain but high-expected-value bets in our growth plans. There’s a risk that we’ll dip below a year’s runway in 2017 if we make more hires than expected, but the growing support of our donor base would make us feel comfortable about taking such risks.

Stretch target - $1,250,000. At this level, even if we exceed my growth expectations, we’d be able to grow without real risk of dipping below a year’s runway. Past $1.25M, we would not expect additional donations to affect our 2017 plans much, assuming moderate off-fundraiser support.5

If we hit our growth and stretch targets, we’ll be able to execute with more confidence on several additional programs we’re considering. These include contracting a larger pool of researchers to do early work with us on logical induction and on our machine learning agenda, and generally spending more time on academic outreach, field-growing, and training or trialing potential collaborators and hires.

As always, you’re invited to get in touch if you have questions about our upcoming plans and recent activities. I’m very much looking forward to seeing what new milestones the growing alignment research community will hit in the coming year, and I’m very grateful for the thoughtful engagement and support that’s helped us get to this point.

Donate Now


Pledge to Give

1 This excludes Katja Grace, who heads the AI Impacts project using a separate pool of funds earmarked for strategy/forecasting research. It also excludes me: I contribute to our technical research, but my primary role is administrative.

2 We expect to be slightly under the $1.825M budget we previously projected for 2016, due to taking on fewer new researchers than expected this year.

3 We’re imagining continuing to run one fundraiser per year in future years, possibly in September.

4 Separately, the Open Philanthropy Project is likely to renew our $500,000 grant next year, and we expect to receive the final ($80,000) installment from the Future of Life Institute’s three-year grants. For comparison, our revenue was about $1.6 million in 2015: $167k in grants, $960k in fundraiser contributions, and $467k in off-fundraiser (non-grant) contributions. Our situation in 2015 was somewhat different, however: we ran two 2015 fundraisers, whereas we’re skipping our winter fundraiser this year and advising December donors to pledge early or give off-fundraiser.

5 At significantly higher funding levels, we’d consider running other useful programs, such as a prize fund. Shoot me an e-mail if you’d like to talk about the details.