Why CFAR?

Summary: We outline the case for CFAR, including our long-term goal, our plan and progress to date, our financials, and how you can help.

CFAR is in the middle of our annual matching fundraiser right now. If you’ve been thinking of donating to CFAR, now is the best time to decide for probably at least half a year. Donations up to $150,000 will be matched until January 31st; and Matt Wage, who is matching the last $50,000 of donations, has vowed not to donate unless matched.[1]

Our workshops are cash-flow positive, and subsidize our basic operations (you are not subsidizing workshop attendees). But we can’t yet run workshops often enough to fully cover our core operations. We also need to do more formal experiments, and we want to create free and low-cost curricula with far broader reach than the current workshops. Donations are needed to keep the lights on at CFAR, fund free programs like the Summer Program on Applied Rationality and Cognition, and let us do new and interesting things in 2014 (see below, at length).[2]

Our long-term goal

CFAR’s long-term goal is to create people who can and will solve important problems—whatever the important problems turn out to be.[3]

We therefore aim to create a community with three key properties:

  1. Competence—The ability to get things done in the real world. For example, the ability to work hard, follow through on plans, push past your fears, navigate social situations, organize teams of people, start and run successful businesses, etc.

  2. Epistemic rationality—The ability to form relatively accurate beliefs. Especially the ability to form such beliefs in cases where data is limited, motivated cognition is tempting, or the conventional wisdom is incorrect.

  3. Do-gooding—A desire to make the world better for all its people; the tendency to jump in and start/assist projects that might help (whether by labor or by donation); and ambition in keeping an eye out for projects that might help a lot and not just a little.

Why competence, epistemic rationality, and do-gooding?
To change the world, we’ll need to be able to take effective action (competence). We’ll need to be able to form a good implicit and explicit understanding of the human world and how to shift it. We’ll need to have the best shot we can get at modeling situations yet unseen. We’ll need to solve problems outside the realms where competent business people already find traction (all of which require competence plus epistemic rationality). And we’ll need to blend these abilities with a burning ambition to leave the world far better than we found it (competence plus epistemic rationality plus do-gooding).
And we’ll need a community, not just a set of individuals. It is hard for an isolated individual to figure out what the most important problems are, let alone how to effectively solve them. This is still harder for individuals who have interesting day jobs, and who are busy amassing real-world competence of varied sorts. Communities can assemble a complex world-model piece by piece. Communities can build and sustain motivation, as well, and facilitate the practice and transfer of useful skills. The aim is thus to create a community that, collectively, can figure out what needs doing and can then do it—even when this requires multiple simultaneous competencies (e.g., locating a particular existential risk, and having good scientific connections, and knowing good folks in policy, and knowing how to do good technical research).
We intend to build that sort of community.

Our plan, and our progress to date

How can we create a community with high levels of competence, epistemic rationality, and do-gooding? By creating curricula that teach (or enhance) these properties; by seeding the community with diverse competencies and diverse perspectives on how to do good; and by linking people together into the right kind of community.

We’ve now had two years to execute on this vision.[4] It’s not a lot of time, but it’s enough to get started; and it’s enough that folks should already be able to update as to our ability to execute.
Here’s our current working plan, the progress we’ve made so far, and the pieces we still need to hit.

Curriculum design

In October 2012, we had no money and little visible means of obtaining more.[5] We needed runway; and we needed a way to use that runway to rapidly iterate curriculum.
We therefore focused our initial efforts on making a workshop that could pay its own bills, and at the same time give us data—a workshop that would give us the opportunity to run (and learn from) many further workshops. Our applied rationality workshops have filled this role.

Progress to date

Reported benefits
After about a dozen workshops (and over 100 classes that we’ve designed and tested), we’ve settled on a workshop model that runs smoothly, and seems to provide value to our participants, who report a mean of 9.3 out of 10 in response to the question “Are you glad you came?”. In the process we’ve substantially improved our skill at curriculum design: it used to take us about 40 hours to design a unit we regarded as decent (design; test on volunteers; redesign; test again; etc.). It now takes us about 8 hours to design a unit of the same quality.[6]
Anecdotally, we have many, many stories from alumni about how our workshop increased their competence (both generally and for altruistic ends). For example, alum Ben Toner, CEO of Draftable, recounts that after the July 2012 workshop, “At work, I realized I wasn’t doing anywhere near enough planning. My employees were spending time on the wrong things because I hadn’t planned things out in enough detail to make it clear what was the most important thing to do next. I fixed this immediately after the camp.” Alum Ben Kuhn has described how the CFAR workshop helped his effective altruism group “vastly increase our campus presence—everything from making uncomfortable cold calls to powering through bureaucracy, and from running complex events to quickly updating on feedback.” (Check out our testimonials page for more examples.)
Measurement
Anecdata notwithstanding, the jury is still out regarding the workshops’ usefulness to those who come. During the very first minicamps (the current workshops are agreed to be better) we randomized admission of 15 applicants, with 17 controls. Our study was low-powered, and effects on e.g. income would have needed to be very large for us to expect to detect them. Still, we ended up with non-negligible evidence of absence: income, happiness, and exercise did not visibly trend upward one year later. We detected statistically significant positive impacts on the standard (BFI-10) survey pair for emotional stability, “I see myself as someone who is relaxed, handles stress well” / “I get nervous easily” (p=.002). Also significant were effects on an abridged General Self-Efficacy Scale (sample item: “I can solve most problems if I invest the necessary effort”) (p=.007). The details will be available soon on our blog (including a much larger number of negative results). We’ll run another RCT soon, funding permitting.
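To make “low-powered” concrete, here is a quick power calculation for a 15-vs-17 two-group comparison (an illustrative sketch using standard two-sample t-test machinery, not the study’s actual analysis code):

```python
# Illustrative sketch: power of a two-sample t-test with 15 treated
# participants and 17 controls, at alpha = 0.05 (two-sided).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.2, 0.5, 0.8, 1.0):  # Cohen's d: small, medium, large, very large
    power = analysis.power(effect_size=d, nobs1=15, ratio=17 / 15, alpha=0.05)
    print(f"d = {d:.1f} -> power ~ {power:.2f}")

# Roughly: d=0.2 -> 0.09, d=0.5 -> 0.27, d=0.8 -> 0.58, d=1.0 -> 0.78.
# Even a conventionally "large" effect (d = 0.8) would be missed over 40%
# of the time, which is why only very large effects on income and the like
# would have been detectable.
```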
Like many participants, we at CFAR have the subjective impression that the workshops boost strategicness; and, like most who have observed two workshops, we have the impression that today’s workshops are much better than those in the initial RCT. We’ll need to find ways to actually test those impressions, and to create stronger feedback loops from measurement into curriculum development.
Epistemic rationality curricula
After a rocky start, our epistemic rationality curriculum has seen a number of recent victories. Our “Building Bayesian Habits” class began performing much better after we figured out how to help people notice their intuitive, “System 1” expectations of probabilities.[7] Our “inner simulator” class conveys the distinction between profession and anticipation while aiming at immediate, practical benefits; it isn’t about religion and politics, it’s about whether your mother will actually enjoy the potted plant you’re thinking of giving her. More generally, the epistemic rationality curriculum appears to be integrating deeply with the competence curriculum, and appears to be becoming more appealing to participants as it does so. Strengthening this curriculum, and building in real tests of its efficacy, will be a major focus in 2014.
Integrating with academic research
We made preliminary efforts in this direction—for example, by taking standard questionnaires from the academic literature, including Stanovich’s indicators of the traits he calls “rationality”, and administering them to attendees at a Less Wrong meetup. (We found that meetup attendees scored near the ceiling, so we’ll probably need new questionnaires with better discrimination.) Our research fellow, Dan Keys (whose master’s thesis was on heuristics and biases), spends a majority of his time keeping up with the literature and integrating it with CFAR workshops, as well as designing tests for our ongoing forays into randomized controlled trials. We’re particularly excited by Tetlock’s Good Judgment Project, and we’ll be piggybacking on it a bit to see if we can get decent ratings.
Accessibility
Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality—such as a local politician, a police officer, a Spanish teacher, and others—are by and large quite happy with the workshop and feel it is valuable.
Nevertheless, the total set of people who can travel to a 4.5-day immersive workshop, and who can spend $3900 to do so, is limited. We want to eventually give a substantial skill-boost in a less expensive, more accessible format; we are slowly bootstrapping toward this.
Specifically:
  • Shorter workshops: We’re working on shorter versions of our workshops (including three-hour and one-day courses) that can be given to larger sets of people at lower cost.

  • College courses: We helped develop a course on rational thinking for UC Berkeley undergraduates, in partnership with Nobel Laureate Saul Perlmutter. We also brought several high school and university instructors to our workshop, to help seed early experimentation into their curricula.

  • Increasing visibility: We’ve been working on increasing our visibility among the general public, with alumni James Miller and Tim Czech both working on non-fiction books that feature CFAR, and several mainstream media articles about CFAR on their way, including one forthcoming shortly in the Wall Street Journal.

Next steps

In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.

Forging community

The most powerful interventions are not one-off experiences; rather, they are the start of an ongoing practice. Changing one’s social environment is one of the highest-impact ways to create personal change. Alum Paul Crowley writes that “The most valuable lasting thing I got out of attending, I think, is a renewed determination to continually up my game. A big part of that is that the minicamp creates a lasting community of fellow alumni who are also trying for the biggest bite of increased utility they can get, and that’s no accident.”
The goal is to create a community that is directly helpful for its members, and that simultaneously improves its members’ impact on the world.

Progress to date

A strong set of seed alumni
We have roughly 350 alumni so far, including scientists from MIT and Berkeley, college students, engineers from Google and Facebook, founders of Y Combinator startups, teachers, professional writers, and the exceptionally gifted high-school students who participated in SPARC 2013 and 2012. (Not counted in that tally are the 50-some attendees of the 2013 Effective Altruism Summit, for whom we ran a free, abridged version of our workshop.)
Alumni contact/community
There is an active alumni Google group, which gets daily traffic. Alumni use it to share useful life hacks they’ve discovered, help each other troubleshoot, and notify each other of upcoming events and opportunities. We’ve also been using our post-workshop parties as reunions for alumni nearby (in the San Francisco Bay area, the New York City area, and—in two months—Melbourne, Australia).
In large part thanks to our alumni forum and the post-workshop party networking, there have already been numerous cases of alumni helping each other find jobs and collaborating on startups or other projects. Several alumni have also been recruited to do-gooding projects (e.g., MIRI and Leverage Research have engaged multiple alumni), and others have improved their “earn to give” ability or shifted their own do-gooding strategy.
Many alumni also take CFAR skills back to Less Wrong meet-ups or other local communities (for example, the effective-altruism meetup in Melbourne, a homeless youth shelter in Oregon, and a self-improvement group in NYC); many have also practiced in their startups and with co-workers (for example, at Beeminder, MetaMed, and Aquahug).
Do-gooding diversity
We’d like the alumni community to have an accurate picture of how to effectively improve the world. We don’t want to try to figure out how to improve the world all from scratch. There are already a number of groups who’ve done a lot of good thinking on the subject, including some who call themselves “effective altruists”, but also people who call themselves “social entrepreneurs”, “x-risk minimizers”, and “philanthropic foundations”.
We aim to bring in the best thinkers and doers from all of these groups to seed the community with diverse good ideas on the subject. The goal is to create a culture rich enough that the alumni, as a community, can overcome any errors in CFAR’s founders’ perspectives. The goal is also to create a community that is defined by its pursuit of true beliefs, and that is not defined by any particular preconceptions as to what those beliefs are.
We use applicants’ inclination to do good as a major criterion for financial aid. Recipients of our informally-dubbed “altruism scholarships” have included members of the Future of Humanity Institute, CEA, Giving What We Can, MIRI, and Leverage Research. They also include many college or graduate students who have no official EA affiliation, but who are passionate about their desire to devote their careers to world-saving (and who hope the workshops can help them figure out how to do so). And they include folks who are working full-time on varied do-gooding projects of broader origin, such as social entrepreneurs, someone working on community policing, and folks working at a major philanthropic foundation.
International outreach
We’ll be running our first international workshop in Australia, in February 2014, thanks to alumni Matt and Andrew Fallshaw.
Also, starting in 2014, we’ll be bringing about 20 Estonian math and science award-winners per year to CFAR workshops, thanks to a 5-year pledge from Jaan Tallinn to sponsor workshop spots for leading students from his home country. Estonia is an EU member country with a population of 1.2 million and a high-technology economy, and going forward this might be our first opportunity to check whether there are network effects when a relatively large fraction of a single stratum attends.

Next steps

Over 2014, a major focus will be improving opportunities for ongoing alumni involvement. If funding allows, we’ll also try our hand at pilot activities for meet-ups.
Specific plans include:
• A two-day “Epistemic Rationality and EA” mini-workshop in January, targeted at alumni;

• An alumni reunion this summer (which will be a multi-day event drawing folks from our entire worldwide alumni community, unlike the alumni parties at each workshop);

• An alumni directory, as an attempt to increase business and philanthropic partnerships among alumni.

Financials

Expenses

Our fixed expenses come to about $40k per month. In some detail (a quick arithmetic check follows this list):
• About $7k for our office space

• About $3k for miscellaneous expenses

• About $30k for salary & wages, going forward

  • We have five full-time people on salary, each getting $3.5k per month gross. The employer portion of taxes adds roughly an additional $1k/month per employee.

  • The remaining $7k or so goes to hourly employees and contractors. We have two roughly full-time hourly employees, and a few contractors who do website adjustment and maintenance, workbook compilation for a workshop, and similarly targeted tasks.
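As a sanity check, here is the arithmetic behind the ~$40k/month figure (our own back-of-the-envelope sketch, using only the numbers listed above):

```python
# Back-of-the-envelope check that the expense line items sum to ~$40k/month.
salaried = 5 * (3_500 + 1_000)   # five salaried staff: $3.5k gross + ~$1k employer taxes each
hourly_and_contractors = 7_000   # two ~full-time hourly employees plus contractors
office = 7_000                   # office space
misc = 3_000                     # miscellaneous expenses
total = salaried + hourly_and_contractors + office + misc
print(total)  # 39500 -- i.e., roughly $40k/month in fixed expenses
```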

In addition to our fixed expenses, we chose to run SPARC 2013, even though it would cause us to run out of money right around the end-of-year fundraising drive. We did so because we judged SPARC to be potentially very important[8], enough to justify the risk of leaning on this winter fundraiser to continue. All told, SPARC cost approximately $50k in direct costs (not counting staff time).
(We also chose to e.g. teach at the EA Summit, do rationality research, put some effort into curricula that can be delivered cheaply to a larger crowd, etc. These did not incur much direct expense, but did require staff time which could otherwise have been directed towards revenue-producing projects.)

Revenue

Workshops are our primary source of non-donation income. We ran 7 of them in 2013, and they became increasingly cash-positive through the year. We now expect a full 4-day workshop held in the Bay Area to give us a profit of about $25k (ignoring fixed costs, such as staff time and office rent), which is just under 3 weeks of CFAR runway. Demand isn’t yet reliable enough to let us run workshops at that frequency (roughly one every three weeks). We’ve gained significant traction on building interest outside of the Less Wrong community, but there’s still work to be done here, and that work will take time. In the meantime, workshops can subsidize some of our non-workshop activities, but not all of them. (Your donations do not go to subsidize workshops!)
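(Where does “just under 3 weeks” come from? A quick sketch of the arithmetic, using the fixed-expense figure above:)

```python
# Runway bought by one workshop's profit, at ~$40k/month in fixed expenses.
monthly_fixed = 40_000
workshop_profit = 25_000
weeks_per_month = 52 / 12                      # ~4.33 weeks per month
weeks_of_runway = workshop_profit / (monthly_fixed / weeks_per_month)
print(f"{weeks_of_runway:.1f} weeks")          # ~2.7 weeks -- just under 3
```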
We’re also actively exploring revenue models other than the four-day workshop. Several of them look promising, but they need time to come to fruition before the income they offer us becomes relevant.

Donations

CFAR received $166k in our previous fundraising drive at the start of 2013, and a smaller amount of donations spread across the rest of the year. SPARC was partially sponsored with $15k from Dropbox and $5k from Quixey. These donations subsidized SPARC, the rationality workshop at the EA Summit, research and development, and core expenses and salary.

Savings and debt

Right now CFAR has essentially no savings. The savings we accumulated by the end of 2012 went to (a) feeding the gap between income and expenses and (b) funding SPARC.
A $30k loan, which helped us cover core 2013 expenses, comes due in March 2014.

Summary

If this winter fundraiser goes well, it will give us time to bring some of our current experimental products to maturity. We think we have an excellent shot at making major strides forward in CFAR’s mission, as well as becoming much more self-sustaining, during 2014.
If this winter fundraiser goes poorly, CFAR will not have sufficient funding to continue core operations.

How you can help

Our main goals in 2014:

1. Building a scalable revenue base, including by ramping up our workshop quality, workshop variety, and our marketing reach.

2. Community-building, including an alumni reunion.

3. Creating more connections with the effective altruism community, and other opportunities for our alumni to get involved in do-gooding.

4. Research to feed back into our curriculum—on the effectiveness of particular rationality techniques, as well as the long-term impact of rationality training on meaningful life outcomes.

5. Developing more classes on epistemic rationality.

The three most important ways you can help:
1. Donations
If you’re considering donating but want to learn more about how CFAR uses money, or you have other questions or hesitations, let us know—we’d be more than happy to chat with you via Skype. You can sign up for a one-on-one call with Anna here.
2. Talent
We’re actively seeking a new director of operations to organize our workshops; good operations can be a great multiplier on CFAR’s total ability to get things done. We are continuing to try out exceptional candidates for a curriculum designer position.[9] And we always need more volunteers to help out with alpha-testing new classes in Berkeley, and to participate in online experiments.
3. Participants
We’re continually searching for additional awesome people for our workshops. This really is a high-impact way people can help us; and we do have a large amount of data suggesting that you (or your friends) will be glad to have come. You can apply here—it takes 1 minute, and leads to a conversation with Anna or Kenzi, which you (or they) will probably find interesting whether or not you (or they) choose to come.
Like the open-source movement, applied rationality will be the product of thousands of individuals’ contributions. The ideas we’ve come up with so far are only a beginning. If you have other suggestions for people we should meet, other workshops we should attend, ways to branch out from our current business model, or anything else—get in touch, we’d love to Skype with you.
You can also be a part of open-source applied rationality by creating good content for Less Wrong. Some of our best workshop participants, volunteers, hires, ideas for rationality techniques, use cases, and general inspiration have come from Less Wrong. Help keep the LW community vibrant and growing.
And, if you’re willing—do consider donating now.

Footnotes

[1] That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Much thanks to Matt Wage, Peter McCluskey, Benjamin Hoffman, Janos Kramar & Victoria Krakovna, Liron Shapira, Satvik Beri, Kevin Harrington, Jonathan Weissman, and Ted Suzman for together putting up $150k in matching funds. (Matt Wage, as mentioned, promises not only that he will donate if the pledge is matched, but also that he won’t donate the $50k of matching funds to CFAR if the pledge isn’t filled—so your donation probably really does cause matching at the margin.)
[2] This post was the result of a collaborative effort between Anna Salamon, Kenzi Amodei, Julia Galef, and “Valentine” Michael Smith—like many of our endeavors at CFAR, it went through many iterations, in many hands, to create an overall whole where the credit due is difficult to tease apart.
[3] In the broadest sense, CFAR can be seen as a cognitive branch of effective altruism—making a marginal improvement to thinking where thinking matters a lot. MIRI did not gain traction until it began to include explicit rationality in its message—maybe because thinking about AI puts heavy loads on particular cognitive skills, though there are other hypotheses. Other branches of effective altruism may encounter their own problems with a heavy cognitive load. Effective altruism is limited in its growth by the supply of competent people who want to quantify the amount of good they do.
It has been true over the course of human history that improvements in world welfare have often been tied to improvements in explicit thinking skills, most notably with the invention of science. Even for someone who doesn’t think that existential risk is the right place to look, trying to invest more in good reasoning, qua good reasoning—doubling down on the huge benefits which explicit cognitive skills have already brought humanity—is a plausible candidate for the highest-impact marginal altruism.
[4] That is, we’ve had two years since our barest beginnings, when Anna, Julia, and Val began working together under the auspices of MIRI; and just over a year as a financially and legally independent organization.
[5] Our pilot minicamps, prior to that October, gave us valuable data and iteration; but they did not pay for their own direct (room and board) costs, let alone for the staff time required.
[6] I’m estimating quality by workshop participants’ feedback here; it takes many fewer hours now for our instructors to create units that receive the same participant ratings as some older unit that hasn’t been revised (we ran this accidental experiment several times). Unsurprisingly, large quantities of unit-design practice, with rapid iteration and feedback, were key to improving our curriculum design skills.
[7] Interestingly, we threw away over a dozen versions of the Bayes class before we developed this one. It has proven somewhat easier to create curricula around strategicness, and around productivity/effectiveness more generally, than around epistemic rationality. The reason for the relative difficulty appears to be two-fold. First, it is somewhat harder to create a felt need for epistemic rationality skills, at least among those who aren’t working on gnarly, data-sparse problems such as existential risk. Second, it is in general harder to create from scratch than to create by borrowing, and there is more existing material on strategicness than on epistemic rationality. Nevertheless, we have, via much iteration, had some significant successes, including the Bayes class, separating professed beliefs from anticipated ones, and certain subskills of avoiding motivated cognition (e.g. noticing curiosity; noticing and tuning in to mental flinches). Better yet, there seems to be a pattern to these successes which we are gradually getting the hang of.
We’re excited that Ben Hoffman has pledged $23k of funding specifically to enable us to improve our epistemic rationality curriculum and our research plan.
[8] From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math test scores in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.
More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.
[9] To those who’ve already applied: Thanks very much for applying, and our apologies for not getting back to you so far. If the funding drive is filled (so that we can afford to possibly hire someone new), we’ll be looking through the applications shortly after the drive completes and will get back to you then.