We run the Center for Applied Rationality, AMA

CFAR recently launched its 2019 fundraiser, and to coincide with that, we wanted to give folks a chance to ask us about our mission, plans, and strategy. Ask any questions you like; we'll respond to as many as we can from 10am PST on 12/20 until 10am PST the following day (12/21).

Topics that may be interesting include (but are not limited to):

  • Why we think there should be a CFAR;

  • Whether we should change our name to be less general;

  • How running mainline CFAR workshops does/doesn't relate to running “AI Risk for Computer Scientists” type workshops. Why we both do a lot of recruiting/education for AI alignment research and wouldn't be happy doing only that.

  • How our curriculum has evolved. How it relates to and differs from the Less Wrong Sequences. Where we hope to go with our curriculum over the next year, and why.

Several CFAR staff members will be answering questions, including me, Tim Telleen-Lawton, Adam Scholl, and probably various others who work at CFAR. However, we will try to answer with our own individual views (because individual speech is often more interesting than institutional speech, and certainly easier to do in a non-bureaucratic way on the fly), and we may give more than one answer to questions where our individual viewpoints differ from one another's!

(You might also want to check out our 2019 Progress Report and Future Plans. And we'll have some other posts out across the remainder of the fundraiser, from now until Jan 10.)

[Edit: We're out of time, and we've allocated most of the reply-energy we have for now, but some of us are likely to continue slowly dribbling out answers from now until Jan 2 or so (maybe especially to replies, but also to some of the questions that we didn't get to yet). Thanks to everyone who participated; I really appreciate it.]