MIRI’s 2015 Summer Fundraiser!

Our summer fundraising drive is now finished. We raised a grand total of $631,957 from 263 donors. This is an incredible sum, making this the biggest fundraiser we’ve ever run.

We’ve already been hard at work growing our research team and spinning up new projects, and I’m excited to see what the team can do this year. Thank you to all our supporters for making our summer fundraising drive so successful!

It’s safe to say that this past year exceeded a lot of people’s expectations.

Twelve months ago, Nick Bostrom’s Superintelligence had just come out. Questions about the long-term risks and benefits of smarter-than-human AI systems were nearly invisible in mainstream discussions of AI’s social impact.

Twelve months later, we live in a world where Bill Gates is confused as to why so many researchers aren’t using Superintelligence as a guide to the questions we, as a field, should be asking about AI’s future.

Following a conference in Puerto Rico that brought together the leading organizations studying long-term AI risk (MIRI, FHI, CSER) and top AI researchers in academia (including Stuart Russell, Tom Mitchell, Bart Selman, and the Presidents of AAAI and IJCAI) and industry (including representatives from Google DeepMind and Vicarious), we’ve seen Elon Musk donate $10M to a grants program aimed at jump-starting the field of long-term AI safety research; we’ve seen the top AI and machine learning conferences (AAAI, IJCAI, and NIPS) announce their first-ever workshops or discussions on AI safety and ethics; and we’ve seen a panel discussion on superintelligence at ITIF, the leading U.S. science and technology think tank. (I presented a paper at the AAAI workshop, I spoke on the ITIF panel, and I’ll be at NIPS.)

As researchers begin investigating this area in earnest, MIRI is in an excellent position, with a developed research agenda already in hand. If we can scale up as an organization, we have a unique chance to shape the research priorities and methods of this new paradigm in AI, and to direct this momentum in useful directions.

This is a big opportunity. MIRI is already growing and scaling its research activities, but the speed at which we scale in the coming months and years depends heavily on our available funds.

For that reason, MIRI is starting a six-week fundraiser aimed at increasing our rate of growth.

This time around, rather than running a matching fundraiser with a single fixed donation target, we’ll be letting you help choose MIRI’s course based on the details of our funding situation and how we would make use of marginal dollars.

In particular, our plans can scale up in very different ways depending on which of these funding targets we are able to hit:

Target 1 — $250k: Continued growth. At this level, we would have enough funds to maintain a twelve-month runway while continuing all current operations, including running workshops, writing papers, and attending conferences. We would also be able to scale the research team up by one to three additional researchers, on top of our three current researchers and the two new researchers who are starting this summer. This would ensure that we have the funding to hire the most promising researchers who come out of the MIRI Summer Fellows Program and our summer workshop series.

Target 2 — $500k: Accelerated growth. At this funding level, we could grow our team more aggressively while maintaining a twelve-month runway. We would have the funds to expand the research team to ten core researchers, while also taking on a number of exciting side-projects, such as hiring one or two type theorists. Recruiting specialists in type theory, a field at the intersection of computer science and mathematics, would enable us to develop tools and code that we think are important for studying verification and reflection in artificial reasoners.

Target 3 — $1.5M: Taking MIRI to the next level. At this funding level, we would start reaching beyond the small but dedicated community of mathematicians and computer scientists who are already interested in MIRI’s work. We’d hire a research steward to spend significant time recruiting top mathematicians from around the world, we’d make our job offerings more competitive, and we’d focus on hiring highly qualified specialists in relevant areas of mathematics. This would allow us to grow the research team as fast as is sustainable, while maintaining a twelve-month runway.

Target 4 — $3M: Bolstering our fundamentals. At this level of funding, we’d start shoring up our basic operations. We’d spend resources and experiment to figure out how to build the most effective research team we can. We’d upgrade our equipment and online resources. We’d branch out into additional high-value projects outside the scope of our core research program, such as hosting specialized conferences and retreats and running programming tournaments to spread interest in certain open problems. At this level of funding we’d also start extending our runway, and prepare for sustained aggressive growth over the coming years.

Target 5 — $6M: A new MIRI. At this point, MIRI would become a qualitatively different organization. With this level of funding, we would be able to begin building an entirely new AI alignment research team that works in parallel with our current team, tackling different problems and taking a different approach. Our current technical agenda is not the only way to approach the challenges that lie ahead, and we would be thrilled to get the opportunity to spark a second research group.

We also have plans that extend beyond the $6M level: for more information, shoot me an email at nate@intelligence.org. I also invite you to email me with general questions or to set up a time to chat.

If you intend to make use of corporate matching (check here to see whether your employer will match your donation), email malo@intelligence.org and we’ll include the matching contributions in the fundraiser total.

Some of these targets are quite ambitious, and I’m excited to see what happens when we lay out the available possibilities and let our donors collectively decide how quickly we develop as an organization.

We’ll be using this fundraiser as an opportunity to explain our research and our plans for the future. If you have any questions about what MIRI does and why, email them to rob@intelligence.org. Answers will be posted to the MIRI blog every Monday and Friday.

Below is a list of explanatory posts written for this fundraiser, which we’ll be updating regularly:

Our hope is that these new resources will help you, our supporters, make more informed decisions during our fundraiser, and also that our fundraiser will serve as an opportunity for you to learn a lot more about our activities and strategic outlook.

This is a critical juncture for the field of AI alignment research. My expectation is that donations today will go much further than donations several years down the line, and time is of the essence if we want to capitalize on our new opportunities.

Your support is a large part of what has put us into the excellent position that we now occupy. Thank you for helping make this exciting moment in MIRI’s development possible!