CFAR-run MIRI Summer Fellows program: July 7-26

CFAR will be running a three-week summer program this July for MIRI, designed to increase participants’ ability to do technical research into the superintelligence alignment problem.

The intent of the program is to boost participants as far as possible in four skills:

  1. The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;

  2. “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems”, i.e., the skillset taught in the core LW Sequences. (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)

  3. The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).

  4. The basics of AI safety-relevant technical research. (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and also reflecting on the cognitive habits involved.)

The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be available to those with exceptional financial need.

If you’re interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/

Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine how well we navigate it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.