Why CFAR? The view from 2015

Follow-up to: 2013 and 2014.

In this post, we review CFAR's mission and why it matters, describe our progress to date, give a financial retrospective for 2015, lay out our ambitions for 2016, and explain how you can help.

We are in the middle of our matching fundraiser, so if you've been considering donating to CFAR this year, now is an unusually good time.

CFAR's mission, and why that mission matters today

CFAR's mission is to help people develop the abilities that let them meaningfully assist with the world's most important problems, by improving their ability to arrive at accurate beliefs, act effectively in the real world, and sustainably care about that world.

We know this is an audacious thing to try—especially the "ability to form accurate beliefs" part—but it seems to us that such attempts sometimes work anyhow. Eliezer's Sequences seem to offer principled improvements to some aspects of some people's world-modeling skill (via synthesizing much recent cognitive science, probability theory, etc.); this seems to us to be a useful point from which to build.

The fact remains that we do not yet have the talent necessary to win—to see the world's problems clearly, plot strategies that have a shot at working, update when those strategies don't work, and plan effectively around unknowns; to avoid any great filters that may be lurking, solve global and even astronomical challenges, and create a flourishing world for all.

Arguably, people of the caliber we're shooting for don't exist yet; but even if they do, it seems clear that we don't have enough of them to give us much assurance of actually succeeding.

So, audacious or not, this is a task that needs to be done, and CFAR is our attempt to do it. If we can widen the bottleneck on thinking better and doing more, we're increasing the odds of a better future regardless of what the important problems turn out to be.

Our progress to date

By the end of 2014, CFAR had created workshops that participants liked a lot and that, evidence suggests, had concrete benefits for them. However, our mission remains to impact the world. The question became whether we could adapt our workshops into something with the potential for large impact.

Our central goal for 2015 was therefore to create what we called a "minimum strategic product"—a product that, as we put it last year, would "more directly justify CFAR's claim to be an effective altruist project" by demonstrating that we could sometimes improve people's thinking skill, competence, and/or do-gooding to the point where they were able to engage in direct work on a key talent-limited task.

Running the MIRI Summer Fellows Program gave us the opportunity we'd sought to try our hand at creating such direct impact. Our plan was to test and develop our curriculum and training methods by running a training program that would not only improve people's ability to think about some of the big questions, but do so in a fashion that could lead to immediate progress.

How did we do? Here's what Nate Soares, MIRI's Executive Director, had to say:

"MSFP was a resounding success: many participants gained new skills relevant to alignment research, and the program led directly to multiple MIRI hires. The world needs more talented people focusing on big important problems, and CFAR has figured out how to develop those sorts of talents in practice."

While working to help create AI alignment researchers, we also found that this focus on how to become a better scientist led us into more fruitful territory for improving our understanding of the art. (If you're curious, you can see a highly incoherent version of some of the skills we tried to get across in this working document. Read below for more details about art creation, and about our plans to expand our more targeted training programs.)

Last year's "goals for 2015"

We hit some of our concrete goals for 2015 and got distracted from others (partly, perils of unanticipated opportunities :-/).

We created a provisional metric for participants' before-and-after strategic usefulness, hitting the first goal; we started tracking that metric, hitting the second goal. Then we found that the metric was too unwieldy and too interpersonally tricky to use regularly on participants, making this "hitting" of our "goals" somewhat less useful than we had hoped. (On the upside, we learned something about how not to build metrics. :-/)

We then got the opportunity to run MIRI Summer Fellows, as noted above… and mostly dropped our previously declared goals in order to pull off the program, partly because those goals had been meant as a concretization of "can we train people who matter for the world", and the Summer Fellows program seemed like a better concretization of the same. (The program required a lot of new curriculum beyond what we already had, and a lot of skill development on the part of our teaching staff; even so, and despite Nate's calling it a "resounding success", we had a feeling of leaving a lot of opportunity on the table—opportunity we intend to pick up in our second MIRI Summer Fellows program this coming summer.)

From the original "concrete goals" list: goal three was a bit wishy-washy, but was probably done. Goals four and five we did not even measure to see whether we hit them. We should and will measure these, and will let you know when we do; it seems good that we opportunistically put our all into the Summer Fellows program (and okay to de-emphasize old goals in pursuit of that), but good also to then follow up for the sake of feedback loops and honesty.

Organizational capital

2015 was the year in which we finally managed to stop wearing all the hats, thanks to a huge increase in organizational capital. At the start of 2015, workshops were stressful for staff. Between workshops, our workdays were cluttered with a disproportionate amount of attention spent on logistics, alumni follow-ups, and tasks like accounting.

This stress and clutter were part of what was preventing us from seeing what we were doing and figuring out how to actually contribute to the world; smoothing out the wrinkles in our day-to-day workflow was (we think) a major stepping stone toward discovering our minimum strategic product.

That's why we spent a lot of time and effort this year on streamlining operations and increasing specialization, so that we could both free up capacity to focus on developing the art and create the capacity to scale our workshops. We systematized tasks like accounting and venue searches, and began using alumni volunteers as follow-up mentors to supplement our newly created post-workshop email exercises and online hangouts. These efforts culminated in two new hires—Pete Michaud and Duncan Sabien—and a reorganization of CFAR into two subteams: Core (focused on operations) and Labs (focused on research).

For a complete overview of what we intend to accomplish in 2016, see Ambitions for 2016 below.

Some snapshots from our rationality development

There is the process by which we improve a workshop, and there is the process by which we improve our understanding of how rationality works at its core. The two processes don't always help one another, but this year they did.

How we got there:

  • As it turns out, attempting to create AI risk scientists (as opposed to boosting the scientist-nature of everyday people) put a subtle but significant new spin on the teaching of Sequences-style epistemic rationality. It helped that the researchers were themselves trying to model mind-like processes, and that they stubbornly insisted on building related models of what the heck we were trying to convey.

  • MIRI Summer Fellows was also a project we could just actually see mattered, and there's nothing quite like actual stakes when it comes to creating a sense of drive and purpose, and being willing to update.

  • Improving organizational capital created a positive feedback loop. Working to make our workshops "crisp"—to clean up the methods and metaphors that weren't pulling their weight—helped make more of what we knew more visible.

Here are some brief highlights of the new Art of Rationality that we're currently seeing:

  • One pillar, not three. CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we've had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of "applied rationality" (for instance, a deep understanding of reductionism seems to help with all three). Recently, we've gotten better at articulating how this link works. For example:

  • Double Crux is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement—the root uncertainty that has the potential to affect both of their beliefs. We introduced this as an epistemic rationality technique, and used it in this way at e.g. EA Global, where people argued about cause prioritization; it then also made its way into our material on competence and on how to sustainably care deeply about the world. (See the next two bullet points.)

  • Competence as "deep/internal epistemic rationality." If I am frequently late to appointments and "don't want to be," this can be framed as stemming from an inaccurate anticipation somewhere in my mind—perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful (in our experience) to "internally double crux" the apparent disagreement (i.e., to play the double crux game between two different models within my own head, working until I have both a better model and a better actual outcome). More generally, we are increasingly making headway on "competence" or "instrumental rationality" problems via techniques aimed at integrating accurate beliefs into all parts of one's psyche.

  • Do-gooding and epistemic rationality. "Do-gooding" would seem to be a goal that some have and others don't, and it would seem odd to try to shift goals by learning epistemic rationality. But it seems to many of us (informally, anecdotally) that there is a kind of "deep epistemic rationality" that doesn't change one's goals, but does help one make actual contact with what is at stake in the world, and with the parts of one's psyche that already care about those stakes… and this can sometimes help in practice to build deep, sustainable caring. The idea is again to e.g. notice a part of you that thinks the world matters, and a part of you that is afraid to look in that direction, and help these parts trade model-pieces and update back and forth (double crux, again). For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val's recent post on grieving.

  • Teaching the synthesis. Our pre-2015 workshops were made of techniques, which was like sounding out words a letter at a time (C-A-T… C… Ca… Cat!). After years of trying to use these techniques to point at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we've finally found framings and explanations (like this one) that actually bridge the gap. Those framings, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art. (The techniques are now stuffed into the first two days; the synthesis, and the rhythms of using applied rationality in practice, now occupy the second half of the workshop and give people a better sense of the lived feeling of the art. We think.)

This is the beginning of work that we're poised to expand and improve in the coming year via our new Labs group.

Financial Retrospective for 2015

General overview

Our net cashflow for the year is about $14k positive so far, though without any further revenue we expect to be around $30k negative by the end of December 2015, as most of our large expenses (rent, payroll, etc.) occur at the end of the month (roughly one more month of expenses, ~$44k as noted below, remains to be paid: $14k − $44k ≈ −$30k). Note that this includes donation revenue from last year's winter fundraiser.

Our basic monthly operating costs for 2015 have averaged $40k, although the average after September went up to $44k due to changing and slightly expanding our team. The latter is the number we use to determine burn rate.

About $30k of this monthly total went to payroll in the last quarter; the rest was split among rent and utilities, parking, office supplies, meals, and miscellaneous. Many of these resources are used for in-office events like test sessions, Less Wrong meetups, and rationality training sessions, and each staff member splits their time differently (and often changeably) among operations, curriculum design, teaching, data analysis, etc. That's why giving a good number for monthly overhead is tricky and unreliable. But to give it a go, it looks like roughly a third of monthly expenses goes to organization maintenance.

A bit over half of the revenue covering this came from donations. The rest came from net revenue from our standard introductory workshops, plus MIRI's payment for our running MSFP. (More details below.)

Main workshops

Our standard introductory workshops serve several important purposes for us. One is that we hope to develop useful products that simultaneously support our mission and make CFAR less fiscally dependent on donations.

We ran four of these workshops (three in the Bay Area and one in Boston). They varied widely in both cost and revenue due to travel, testing out new venues, changing the number of participants per workshop, and several other factors. All told, ignoring costs of staff time (as that's factored into the above burn rate), CFAR main workshops took in a total of ~$123k net revenue (i.e., revenue exceeding cost), or an average of ~$31k net revenue per workshop. Compared to last year, this is down ~$107k total, but up ~$6k per workshop (a quick consistency check on these numbers follows the list below). This is because we chose to run fewer than half as many workshops so as to focus on:

  1. Making the workshops more efficient

  2. Running other programs equally well

  3. Setting up better systems both for workshops and for research
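
As a rough consistency check on the workshop figures above (our own back-of-the-envelope arithmetic, using only the rounded numbers already stated, so the implied count of last year's workshops is approximate):

$$\$123\text{k} + \$107\text{k} = \$230\text{k}, \qquad \$31\text{k} - \$6\text{k} = \$25\text{k}, \qquad \$230\text{k} \div \$25\text{k} \approx 9.$$

That is, roughly nine workshops last year at ~$25k each, versus four this year at ~$31k each, consistent with running fewer than half as many.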

In addition, we've continued a trend from last year: we've decreased the per-workshop cost in staff time, partly through streamlined curriculum and improved systems, and partly through training volunteers to conduct follow-ups, freeing up our core staff to build new programs and spend more time developing advanced rationality theory and instruction. (The volunteer training also does double duty: the original impetus for it was wanting to help alumni benefit from the "learn by teaching" phenomenon, so we are both freeing up staff time and using this to deepen alums' skill with rationality.)

Alumni events

CFAR typically goes into alumni events (workshops and the annual reunion) with the assumption that we're taking on a cost. We view these as opportunities to explore potentially new areas of rationality, and also as ways of encouraging and supporting the CFAR alumni community in their development as rationalists and as a community. It has generally been our policy not to charge for alumni events, but instead to let our alumni know what the per capita cost comes to and ask them to consider donating to compensate.

We track the donations that are in support of these events separately from our standard general donations. As a result, we can pretty clearly see how much each event cost us over and above the associated donations; that is, we can see net cost. In that spirit, here is what we "paid" on net for each of our alumni programs, ignoring staff time:

  • For net zero cost (participants covered meals), we ran a one-day workshop out of the CFAR office on applying Sequences-style thinking to one's daily life and to hard problems like x-risk, as part of our preparation for MSFP.

  • For net zero cost (again, participants covered meals), we ran a 2-day workshop out of the CFAR office on applying Sequences-style thinking to AI risk analysis, also as part of our preparation for MSFP.

  • For net zero cost (participants donated enough to cover venue and meals), we ran a "Hamming" workshop in Boston, to explore what techniques are needed to identify and dive into the most important problems one is currently facing (at work, in one's personal life, as an altruist, or in whatever other domain).

  • For ~$2k, we ran a mentoring workshop in Tiburon, to train volunteers to help us run large-scale workshops and also to do follow-up conversations with participants, helping them benefit from the workshop in the weeks and months afterwards.

  • For ~$15k, we ran our annual alumni reunion. This year we had ~130 participants, with presentations and exercises on some angles on rationality that we think are promising. These events also seem to be a lot of fun, and they help to energize the alumni community and keep us in touch with fresh ideas from the community that haven't yet been put in writing.

  • For net zero cost, we have continued to run a weekly "rationality dojo" out of the CFAR office, where alumni work to deepen their skills with rationality and experiment with possible refinements or additions to the art.

Special programs

This year we ran two main summer programs:

  • SPARC ran for its fourth year in a row. Cisco and MIRI covered the costs of this program, so the non-time cost to CFAR was nil.

  • MIRI hired CFAR to run a three-week intensive Summer Fellows Program (MSFP), aimed at identifying and developing promising math research talent potentially related to AI safety research. MIRI covered the costs of running MSFP and paid CFAR $85k to cover both curriculum development time and time running the program itself.

In addition, an unnamed company hired CFAR to run a small training for them. The net financial effect on CFAR was zero: we charged enough to cover costs, viewing this workshop as an opportunity to continue exploring how CFAR might tailor its material for particular workplaces or specific needs.

Financial Summary

Our financial focus this last year was less on making money now and more on establishing internal infrastructure and strategies for developing solid income going forward.

We're now in an excellent position to make CFAR much less dependent on donations going forward, while simultaneously putting more focused effort into developing, testing, and sharing rationality tools than we've been able to in the past.

This has made 2016 look very promising — but it has also put us in a difficult position right now.

We're farther behind financially right now than we were this time last year, and we need some capital to implement the plans we have in mind. Predicting markets is always hard, but we think that with one more financial push this winter, we can both improve our contribution to the development of rationality and make CFAR largely or perhaps even entirely financially self-sustaining in 2016.

Ambitions for 2016

Hitting Scale

CFAR's mission cashes out when people we've equipped to think better and do more are actually in positions where they are changing the future of our world for the better.

With our external brand and our positioning within the community, we are perhaps uniquely well positioned to attract bright people, orient them toward the values of systematically truer beliefs and world-scale impact, and then make sure they get into the highest-leverage positions they can fill.

We've spent the last three years leveling up our own ability to transmit a skillset and culture that we believe will move the needle in the right direction, and now is the time to execute at scale.

Core and Labs

To make scaling possible while still being able to competently tackle the pedagogical challenges we face, CFAR has arranged itself into two divisions: CFAR Core and CFAR Labs.

Pete Michaud (that's me!) was hired to manage Core operations, including workshop and curriculum production and logistics. Anna Salamon will take the helm of CFAR Labs, which will be principally responsible for answering the questions:

  • What are the highest-impact skillsets?

  • How can we detect them?

  • How can we train them?

  • Is our training actually affecting the important dimensions at the high end?

The Plan

Broadly, in order to attract more people, level them up reliably, and make sure they land in the highest-impact positions they can, our plan is to:

  1. Substantially increase workshop volume

  2. Expand our community and continued training opportunities

  3. Directly address talent gaps by working with other organizations

  4. Continue increasing the quality of our instruction

Increase Workshop Volume

We intend to substantially increase the number of intake workshops we run and the number of participants we can serve per workshop.

"Intake workshops" here means workshops for people who haven't necessarily been exposed to our material or community; said another way, these are workshops that will bring new people into our alumni network.

We are actively seeking a direct sales manager who can not only generate leads but also close workshop sales. An alternative is to hire a two-person marketing and sales team who together can generate leads and place prospects into workshops.

With the help of that new outreach team, we hope to add on the order of 1,000 new alumni in 2016, increasing our total throughput by nearly an order of magnitude.

Handling that new volume of alumni will require increasing attention to streamlining operations, which CFAR Core is handling partially by adding new team members and clarifying roles. In addition to me as the new Managing Director, we've already hired Duncan Sabien, an experienced educator and robustly capable operations generalist. Aside from the outreach team already mentioned, we also intend to hire a community manager (see below for details) and an office assistant to fill in the inevitable gaps of an organization moving as fast as we intend to.

Community and Continued Training Opportunities

Bringing more talented people into the alumni network is only half the battle. Once participants have gone from "Zero to One," only a community of practice can help ensure continued growth for most people.

We believe that one of the primary benefits of CFAR training is ongoing participation in the alumni community, both local to the Bay Area and throughout the world in local meetups and online. That's why we're going to invest in making the community stronger, with even more alumni events, experimental workshops, and deep-dive classes into specific aspects of our curriculum.

Perhaps the crown jewel of our community program is our Mentorship Training Program (MTP), which began its life as our TA Workshop. We intend to develop that seed into a robust pipeline capable of transforming workshop participants into trained rationality instructors.

One major benefit of the MTP will be that we'll have more mentors and instructors to handle the increased load of all these workshops, classes, and other events.

But the MTP is a major growth opportunity even for people who aren't necessarily interested in spreading the art of rationality themselves; we believe, from our experience over the past three years, that the best way to fully grok the art is to be immersed in a field of peers striving for the same, and ultimately to be able to teach it yourself.

This is what we intend to create with the MTP and our new focus on community events.

To plan and manage all these alumni events, we're looking for a capable community manager.

Directly Addressing Talent Gaps

In addition to our classic workshops and general-education alumni programs, we'll also be attempting to ramp up our targeted workshops meant to fill talent gaps for specific organizations.

For example, we'll run our second MIRI Summer Fellows Program, as well as a program funded by a grant from the Future of Life Institute to help promising up-and-coming AI researchers think about AI safety. We're in conversation with other organizations, and it's our intention to run an increasing number of these workshops that focus on the thinking skills needed for particular tasks, in order to help fill critical gaps in important organizations on short time horizons.

If funding permits and our experiments in this area go well, we intend to make these types of workshops more frequent, and perhaps to expand on past success with programs like a European SPARC, and possible "summer camp"-style events where we try to identify particularly talented high school students for training and recruitment into existential risk research.

Labs: Informal experimentation toward a better "Applied Rationality"

The split between Core and Labs doesn't only allow focus on operations—it also allows our Labs folk to invest in the informal experiments, arguments, data-gathering, etc. that seem, over time, to conduce to a better applied rationality.

(This process is messy. Rationality today is not at the level of Newton. It isn't even at the level of Ptolemy, who, despite the mockability of the nested-epicycles method, could predict the motions of the planets with great precision. Rationality is more at the level of a toddler running around, putting everything in its mouth, and ending up thereby with a more integrated informal world-model by having examined many example-objects through several senses each. Our aim this year in Labs is basically to put many, many things in our mouths rapidly, to argue about models in between, and especially to expose ourselves to people who are already working on issues that matter in very competent ways, whom we can nevertheless try to make better; in this way we hope to get a better sense of the higher-end parts of "rationality".)

Toward this end, Labs is currently:

  • Offering one-on-one coaching to quite a few individuals who seem to be contributing to the world in a high-end way, and trying to figure out how they're doing what they're doing, and what pieces may help them contribute more;

  • Working toward more robust and explicit models of the underlying mechanisms that create drive, scientific and epistemic skill, and relevant real-world competence (and of how to intervene upon them);

  • Creating new written rationality sequences meant to expand upon, augment, and improve the original Sequences that brought so many people into the culture of being "less wrong," and oriented them around audacious goals that actually make a difference;

  • Planning experimental workshops of varied sorts, aiming to boost people further toward "actually useful skill-levels in applied rationality".

We are very excited, and we expect that art development will be much easier now that we have a subteam that is free to just actually focus on it. (Last year, we were all doing workshop admissions, logistics, accounting, …)

Limitations and Updates

The primary limiting factor in these plans is our ability to attract a truly excellent salesperson or sales team. With sufficient workshop participation, cashflow bottlenecks are broken and we'll achieve economies of scale that will fundamentally transform our operations.

Failing that recruitment, the next-best alternative is to grow organically through the MTP and other community programs. That is a much slower process, but it pushes us in the same fundamental direction.

And as always, our plans coming into contact with the reality of 2016 will correctly cause us to update, iterate, and potentially pivot given new evidence and insight.

The path forward, and how you can help

CFAR's mission is to gather together people with the potential for real and meaningful impact, and to cause them to come closer to meeting that potential. It doesn't much matter whether you think we're under a ticking clock of existential risk, or you're concerned about a million humans dying every week, or you're simply grumpy that we haven't gotten a human past low Earth orbit since 1972—our individual and collective thinking skill is a key bottleneck on our future.

Applied rationality, more than almost anything else, has a shot at being a truly all-purpose tool in humanity's toolkit, and the bigger the problems on the horizon, the more vital that tool becomes.

2016 will be a particularly critical year in CFAR's history. We're restructuring our team in pretty major ways, and finding the right team members (or not) will determine our ability to get the right character and culture from this new beginning; meanwhile, we've had at least three good people in the last eight months whom we wanted to hire, and who wanted to work for us, but who required salaries we couldn't afford. Beginnings are far easier times in which to make change, and this is the closest we've come to a fresh beginning—and the time we've most expected differential impact from marginal donations—since our inaugural fundraiser of late 2012.

The world of AI risk is changing rapidly, and decisions made over the coming months will shape the future of the field—it would be well to get relevant training programs going now, rather than waiting for some later, hard-won new beginning for CFAR in 2018 or something. The strategic competence we will have going into the spring is likely to be the difference between a CFAR that actually matters and one that sounds good but is ultimately irrelevant.

There are at least four major ways to help:

  1. Donate directly to our winter fundraising drive. This is the most straightforward way to help, and it makes a categorical difference in our ability to execute the mission. (A large majority of our funding comes from small donors.)

  2. If you're interested in rationality, or in the larger questions of humanity's future and existential risk, consider reading the Sequences, or otherwise working to improve your thinking and world-modeling skill. (Strong community epistemology is extremely helpful.)

  3. We're always looking for new alumni, particularly those who care about both rationality and the world. If you haven't been to a workshop, consider applying; and if you have been, consider mentioning it to people who fit that description.

  4. If you're interested in joining us for the long haul, we're currently looking to hire a sales manager, a community manager, and an office assistant (funding permitting). We've identified these three roles as the highest-impact additions to the CFAR staff, and are eager to hear from enthusiastic and qualified candidates.

This is the mission; these are the steps. CFAR has made substantial progress on building a talent pipeline for clear thinkers and world-changers, in large part thanks to generous contributions of time, money, energy, and insight from people like you. We'd like to see a world where this goal has been achieved, and your support is what gets us there. Thanks for reading; do send us any thoughts; and do please consider donating now.