Taking the reins at MIRI

Hi all. In a few hours I'll be taking over as executive director at MIRI. The LessWrong community has played a key role in MIRI's history, and I hope to retain and build your support as MIRI moves towards the mainstream, with more and more people joining the global conversation about long-term AI risks and benefits.

Below I’ve cross-posted my in­tro­duc­tory post on the MIRI blog, which went live a few hours ago. The short ver­sion is: there are very ex­cit­ing times ahead, and I’m hon­ored to be here. Many of you already know me in per­son or through my blog posts, but for those of you who want to get to know me bet­ter, I’ll be run­ning an AMA on the effec­tive al­tru­ism fo­rum at 3PM Pa­cific on Thurs­day June 11th.

I extend to all of you my thanks and appreciation for the support that so many members of this community have given to MIRI throughout the years.


Hello, I’m Nate Soares, and I’m pleased to be tak­ing the reins at MIRI on Mon­day morn­ing.

For those who don’t know me, I’ve been a re­search fel­low at MIRI for a lit­tle over a year now. I at­tended my first MIRI work­shop in De­cem­ber of 2013 while I was still work­ing at Google, and was offered a job soon af­ter. Over the last year, I wrote a dozen pa­pers, half as pri­mary au­thor. Six of those pa­pers were writ­ten for the MIRI tech­ni­cal agenda, which we com­piled in prepa­ra­tion for the Puerto Rico con­fer­ence put on by the FLI in Jan­uary 2015. Our tech­ni­cal agenda is cited ex­ten­sively in the re­search pri­ori­ties doc­u­ment refer­enced by the open let­ter that came out of that con­fer­ence. In ad­di­tion to the Puerto Rico con­fer­ence, I at­tended five other con­fer­ences over the course of the year, and gave a talk at three of them. I also put to­gether the MIRI re­search guide (a re­source for stu­dents in­ter­ested in get­ting in­volved with AI al­ign­ment re­search), and of course I spent a fair bit of time do­ing the ac­tual re­search at work­shops, at re­searcher re­treats, and on my own. It’s been a jam-packed year, and it’s been loads of fun.

I’ve always had a nat­u­ral in­cli­na­tion to­wards lead­er­ship: in the past, I’ve led a F.I.R.S.T. Robotics team, man­aged two vol­un­teer the­aters, served as pres­i­dent of an En­trepreneur’s Club, and co-founded a startup or two. How­ever, this is the first time I’ve taken a pro­fes­sional lead­er­ship role, and I’m grate­ful that I’ll be able to call upon the ex­pe­rience and ex­per­tise of the board, of our ad­vi­sors, and of out­go­ing ex­ec­u­tive di­rec­tor Luke Muehlhauser.

MIRI has improved greatly under Luke's guidance these last few years, and I'm honored to have the opportunity to continue that trend. I've spent a lot of time in conversation with Luke over the past few weeks, and he'll remain a close advisor going forward. He and the management team have spent the last year or so really tightening up the day-to-day operations at MIRI, and I'm excited about all the opportunities we have open to us now.

The last year has been pretty incredible. Discussion of long-term AI risks and benefits has finally hit the mainstream, thanks to the success of Bostrom's Superintelligence and FLI's Puerto Rico conference, and due in no small part to years of movement-building and effort made possible by MIRI's supporters. Over the last year, I've forged close connections with our friends at the Future of Humanity Institute, the Future of Life Institute, and the Centre for the Study of Existential Risk, as well as with a number of industry teams and academic groups who are focused on long-term AI research. I'm looking forward to our continued participation in the global conversation about the future of AI. These are exciting times in our field, and MIRI is well-poised to grow and expand. Indeed, one of my top priorities as executive director is to grow the research team.

That project is already well under way. I'm pleased to announce that Jessica Taylor has accepted a full-time position as a MIRI researcher starting in August 2015. We are also hosting a series of summer workshops focused on various technical AI alignment problems, the second of which is just now concluding. Additionally, we are working with the Center for Applied Rationality to put on a summer fellows program designed for people interested in gaining the skills needed for research in the field of AI alignment.

I want to take a moment to extend my heartfelt thanks to all those supporters of MIRI who have brought us to where we are today: We have a slew of opportunities before us, and it's all thanks to your effort and support these past years. MIRI couldn't have made it as far as it has without you. Exciting times are ahead, and your continued support will allow us to grow quickly and pursue all the opportunities that the last year opened up.

Finally, in case you want to get to know me a little better, I'll be answering questions on the effective altruism forum at 3PM Pacific time on Thursday, June 11th.

Onwards,