MIRI Summer Fellows Program

CFAR and MIRI are running our fifth annual MIRI Summer Fellows Program (MSFP) in the San Francisco Bay Area from August 9 to August 24, 2019.

MSFP is an extended retreat for mathematicians and programmers with a serious interest in making technical progress on the problem of AI alignment. It includes an overview of CFAR’s applied rationality content, a breadth-first grounding in the MIRI perspective on AI safety, and multiple days of actual hands-on research, with participants and MIRI staff attempting to make inroads on open questions.

Program Description

The intent of the program is to boost participants, as far as possible, in four overlapping areas:

Doing rationality inside a human brain: understanding, with as much fidelity as possible, what phenomena and processes drive and influence human thinking and reasoning, so that we can account for our own biases and blind spots, better recruit and use the various functions of our brains, and, in general, be less likely to trick ourselves, gloss over our confusions, or fail to act in alignment with our endorsed values.

Epistemic rationality, especially the subset of skills around deconfusion: building the skill of noticing where the dots don’t actually connect, and answering the question “why do we think we know what we think we know?”, particularly when it comes to predictions and assertions about the future development of artificial intelligence.

Grounding in the current research landscape surrounding AI: being aware of the primary disagreements among leaders in the field, and the arguments for various perspectives and claims. Understanding the current open questions, and why different ones seem more pressing or real under different assumptions. Being able to follow the reasoning behind various alignment schemes/theories/proposed interventions, and being able to evaluate those interventions with careful reasoning and mature (or at least more-mature-than-before) intuitions.

Generative research skill: the ability to make real and relevant progress on questions related to the field of AI alignment without losing track of one’s own metacognition; the parallel processes of using one’s mental tools, critiquing and improving one’s mental tools, and making one’s own progress or deconfusion available to others through talks, papers, and models; and, in general, anything and everything involved in being the sort of thinker who can locate a good question, sniff out promising threads, and collaborate effectively with others and with the broader research ecosystem.

Food and lodging are provided free of charge at CFAR’s workshop venue in Bodega Bay, California. Participants must be able to remain onsite, largely undistracted, for the duration of the program (e.g. no major appointments in other cities, no large looming academic or professional deadlines just after the program).

[4/19/19 Update: Applications closed on March 31, 2019, and finalists were interviewed by a MIRI staff member between April 1 and April 17. Admissions decisions — yes, no, waitlist — will go out no later than April 30th.]

If you have any questions or comments, please email Colm at the contact address above, or, if you suspect others would also benefit from reading the answer, post them here.