MIRI’s 2015 Winter Fundraiser!

MIRI’s Winter Fundraising Drive has begun! Our current progress, updated live:

Donate Now

Like our last fundraiser, this will be a non-matching fundraiser with multiple funding targets our donors can choose between to help shape MIRI’s trajectory. The drive will run until December 31st, and will help support MIRI’s research efforts aimed at ensuring that smarter-than-human AI systems have a positive impact.

Our successful summer fundraiser has helped determine how ambitious we’re making our plans. Although we may still slow down or accelerate our growth based on our fundraising performance, our current plans assume a $1,825,000 annual budget.

About $100,000 of our 2016 budget is being paid for via Future of Life Institute (FLI) grants, funded by Elon Musk and the Open Philanthropy Project. The rest depends on our fundraiser and future grants. We have a twelve-month runway as of January 1, which we would ideally like to extend. Taking all of this into account, our winter funding targets are:

Target 1 — $150k: Holding steady. At this level, we would have enough funds to maintain our runway in early 2016 while continuing all current operations, including running workshops, writing papers, and attending conferences.

Target 2 — $450k: Maintaining MIRI’s growth rate. At this funding level, we would be much more confident that our new growth plans are sustainable, and we would be able to devote more attention to academic outreach. We would be able to spend less staff time on fundraising in the coming year, and might skip our summer fundraiser.

Target 3 — $1M: Bigger plans, faster growth. At this level, we would be able to substantially increase our recruiting efforts and take on new research projects. It would be evident that our donors’ support is stronger than we thought, and we would move to scale up our plans and growth rate accordingly.

Target 4 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would be able to diversify our research initiatives and begin branching out from our current agenda into alternative angles of attack on the AI alignment problem.

The rest of this post will go into more detail about our mission, what we’ve been up to the past year, and our plans for the future.

Why MIRI?

The field of AI has a goal of automating perception, reasoning, and decision-making — the many abilities we lump under the label “intelligence.” Most leading researchers in AI expect our best AI algorithms to begin strongly outperforming humans this century in most cognitive tasks. In spite of this, relatively little time and effort has gone into trying to identify the technical prerequisites for making smarter-than-human AI systems safe and useful.

We believe that several basic theoretical questions will need to be answered in order to make advanced AI systems stable, transparent, and error-tolerant, and in order to specify correct goals for such systems. Our technical agenda describes what we think are the most important and tractable of these questions. Smarter-than-human AI may be 50+ years away, but there are a number of reasons we consider it important to begin work on these problems early:

  1. High capability ceilings — Humans appear to be nowhere near physical limits for cognitive ability, and even modest advantages in intelligence may yield decisive strategic advantages for AI systems.

  2. “Sorcerer’s Apprentice” scenarios — Smarter AI systems can come up with increasingly creative ways to meet programmed goals. The harder it is to anticipate how a goal will be achieved, the harder it is to specify the correct goal.

  3. Convergent instrumental goals — By default, highly capable decision-makers are likely to have incentives to treat human operators adversarially.

  4. AI speedup effects — Progress in AI is likely to accelerate as AI systems approach human-level proficiency in skills like software engineering.

We think MIRI is well-positioned to make progress on these problems for four reasons: our initial technical results have been promising (see our publications), our methodology has a good track record (see MIRI’s Approach), we have already had a significant influence on the debate about long-run AI outcomes (see Assessing Our Past and Potential Impact), and we have an exclusive focus on these issues (see What Sets MIRI Apart?). MIRI is currently the only organization specializing in long-term technical AI safety research, and our independence from industry and academia allows us to effectively address gaps in other institutions’ research efforts.

General Progress in 2015

The big news from the start of 2015 was FLI’s “Future of AI” conference in San Juan, Puerto Rico, which brought together the leading organizations studying long-term AI risk and top AI researchers in academia and industry. Out of the FLI event came a widely endorsed open letter, accompanied by a research priorities document drawing heavily on MIRI’s work. Two prominent AI scientists who helped organize the event, Stuart Russell and Bart Selman, have since become MIRI research advisors (in June and July, respectively). The conference also resulted in an AI safety grants program, with MIRI receiving some of the largest grants. (Details: An Astounding Year, Grants and Fundraisers.)

In March, we released Eliezer Yudkowsky’s Rationality: From AI to Zombies; we hired a new full-time research fellow, Patrick LaVictoire; and we launched the Intelligent Agent Foundations Forum, a discussion forum for AI alignment research. Many of our subsequent publications have been developed from material on the forum, beginning with LaVictoire’s “An introduction to Löb’s theorem in MIRI’s research.”

In May, we co-organized a decision theory conference at Cambridge University and began a summer workshop series aimed at introducing a wider community of scientists and mathematicians to our work. Luke Muehlhauser left MIRI for a research position at the Open Philanthropy Project in June, and I replaced Luke as MIRI’s Executive Director. Shortly thereafter, we ran a three-week summer fellows program with CFAR.

Our July/August funding drive ended up raising $631,957 total from 263 distinct donors, topping our previous funding drive record by over $200,000. Mid-sized donors stepped up their game in a big way to help us hit our first two funding targets: many more donors gave between $5k and $50k than in past fundraisers. During the fundraiser, we also wrote up explanations of our plans and hired two new research fellows: Jessica Taylor and Andrew Critch.

Our fundraiser, summer fellows program, and summer workshop series all went extremely well, directly enabling us to make some of our newest researcher hires. Our sixth research fellow, Scott Garrabrant, will be joining this month, after having made major contributions as a workshop attendee and research associate. Meanwhile, our two new research interns, Kaya Stechly and Rafael Cosman, have been going through old results, consolidating and polishing material into new papers; and three of our new research associates, Vadim Kosoy, Abram Demski, and Tsvi Benson-Tilsen, have been producing a string of promising results on the research forum. (Details: Announcing Our New Team.)

We also spoke at AAAI-15, the American Physical Society, AGI-15, EA Global, LORI 2015, and ITIF, and ran a ten-week seminar series at UC Berkeley. This month, we’ll be moving into a new office space to accommodate our growing team. On the whole, I’m very pleased with our new academic collaborations, outreach efforts, and growth.

Research Progress in 2015

Our increased research capacity means that we’ve been able to generate a number of new technical insights. Breaking these down by topic:

We’ve been exploring new approaches to the problems of naturalized induction and logical uncertainty, with early results published in various venues, including Fallenstein et al.’s “Reflective oracles” (presented in abridged form at LORI 2015) and “Reflective variants of Solomonoff induction and AIXI” (presented at AGI-15), and Garrabrant et al.’s “Asymptotic logical uncertainty and the Benford test” (available on arXiv). We also published the overview papers “Formalizing two problems of realistic world-models” and “Questions of reasoning under logical uncertainty.”

In decision theory, Patrick LaVictoire and others have developed new results pertaining to bargaining and division of trade gains, using the proof-based decision theory framework (example). Meanwhile, the team has been developing a better understanding of the strengths and limitations of different approaches to decision theory, an effort spearheaded by Eliezer Yudkowsky, Benya Fallenstein, and me, culminating in some insights that will appear in a paper next year. Andrew Critch has proved some promising results about bounded versions of proof-based decision-makers, which will also appear in an upcoming paper. Additionally, we presented a shortened version of our overview paper at AGI-15.

In Vingean reflection, Benya Fallenstein and Research Associate Ramana Kumar collaborated on “Proof-producing reflection for HOL” (presented at ITP 2015) and have been working on an FLI-funded implementation of reflective reasoning in the HOL theorem prover. Separately, the reflective oracle framework has helped us gain a better understanding of what kinds of reflection are and are not possible, yielding some nice technical results and a few insights that seem promising. We also published the overview paper “Vingean reflection.”

Jessica Taylor, Benya Fallenstein, and Eliezer Yudkowsky have focused on error tolerance on and off throughout the year. We released Taylor’s “Quantilizers” (accepted to a workshop at AAAI-16) and presented “Corrigibility” at an AAAI-15 workshop.

In value specification, we published the AAAI-15 workshop paper “Concept learning for safe autonomous AI” and the overview paper “The value learning problem.” With support from an FLI grant, Jessica Taylor is working on better formalizing subproblems in this area, and has recently begun writing up her thoughts on this subject on the research forum.

Lastly, in forecasting and strategy, we published “Formalizing convergent instrumental goals” (accepted to an AAAI-16 workshop) and two historical case studies: “The Asilomar Conference” and “Leó Szilárd and the Danger of Nuclear Weapons.”

We additionally revised our primary technical agenda paper for 2016 publication.

Future Plans

Our projected spending over the next twelve months, excluding earmarked funds for the independent AI Impacts project, breaks down as follows:

Our largest cost ($700,000) is in wages and benefits for existing research staff and contracted researchers, including research associates. Our current priority is to further expand the team. We expect to spend an additional $150,000 on salaries and benefits for new research staff in 2016, but that number could go up or down significantly depending on when new research fellows begin work:

• Mihály Bárász, who was originally slated to begin in November 2015, has delayed his start date due to unexpected personal circumstances. He plans to join the team in 2016.

• We are recruiting a specialist for our type theory in type theory project, which is aimed at developing simple programmatic models of reflective reasoners. Interest in this topic has been increasing recently, which is exciting; but the basic tools needed for our work are still missing. If you have programmer or mathematician friends who are interested in dependently typed programming languages and MIRI’s work, you can send them our application form.

• We are considering several other possible additions to the research team.

Much of the rest of our budget goes into fixed costs that will not need to grow much as we expand the research team. This includes $475,000 for administrator wages and benefits and $250,000 for costs of doing business. Our main cost here is renting office space (slightly over $100,000).

Note that the boundaries between these categories are sometimes fuzzy. For example, my salary is included in the admin staff category, despite the fact that I spend some of my time on technical research (and hope to increase that amount in 2016).

Our remaining budget goes into organizing or sponsoring research events, such as fellows programs, MIRIx events, or workshops ($250,000). Some activities (e.g., traveling to conferences) are aimed at sharing our work with the larger academic community. Others, such as researcher retreats, are focused on solving open problems in our research agenda. After experimenting with different types of research staff retreat in 2015, we’re beginning to settle on a model that works well, and we’ll be running a number of retreats throughout 2016.
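
For donors who like to check the arithmetic, here is a quick tally (an illustrative Python sketch, not an official accounting) showing that the line items above add up to the $1,825,000 annual budget mentioned at the start of this post:

```python
# Illustrative tally of the projected 2016 line items described above.
line_items = {
    "existing research staff and contracted researchers": 700_000,
    "new research staff (current estimate)": 150_000,
    "administrator wages and benefits": 475_000,
    "costs of doing business (incl. ~$100k office rent)": 250_000,
    "research events (fellows programs, MIRIx, workshops)": 250_000,
}

total = sum(line_items.values())
print(f"Projected annual spending: ${total:,}")
# Projected annual spending: $1,825,000
```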

In past years, we’ve generally raised $1M per year, and spent a similar amount. However, thanks to substantial recent increases in donor support, we’re in a position to scale up significantly.

Our donors blew us away with their support in our last fundraiser. If we can continue our fundraising and grant successes, we will be able to sustain our new budget and act on the unique opportunities outlined in Why Now Matters, helping set the agenda and build the formal tools for the young field of AI safety engineering. And if our donors keep stepping up their game, we believe we have the capacity to scale up our program even faster. We’re thrilled at this prospect, and we’re enormously grateful for your support.

Donate Now