CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype

Summary: We outline CFAR’s purpose, our history in 2014, and our plans heading into 2015.

One of the reasons we’re publishing this review now is that we’ve just launched our annual matching fundraiser, and we want to provide the information prospective donors need to decide. This is the best time of year to decide to donate to CFAR. Donations up to $120k will be matched until January 31.[1]

To briefly preview: For the first three years of our existence, CFAR mostly focused on getting going. We followed the standard recommendation to build a ‘minimum viable product’, the CFAR workshops, that could test our ideas and generate some revenue. Coming into 2013, we had a workshop that people liked (9.3 average rating on “Are you glad you came?”; a more recent random survey showed a 9.6 average rating on the same question 6–24 months later), which helped keep the lights on and gave us articulate, skeptical, serious learners to iterate on. At the same time, the workshops are not everything we would want in a CFAR prototype; it feels like the current core workshop does not stress-test most of our hopes for what CFAR can eventually do. The premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. We have dreams of scaling up some particular kinds of sanity. Our next goal is to build the minimum strategic product that more directly justifies CFAR’s claim to be an effective altruist project.[2]

Highlights from 2014

Our brand perception improved significantly in 2014, which matters because it leads to companies being willing to pay for workshop attendance. We were covered twice in Fast Company, as well as in the Wall Street Journal and The Reasoner. Other mentions include Forbes, Big Think, Boing Boing, and Lifehacker. We’ve also had some interest in potential training for tech companies.

Our curriculum is gaining a second tier in the form of alumni workshops. We tried 4 experimental alumni workshops, 3 of which went well enough to be worth iterating on:

  • The Hamming Question: “What are the most important problems in your life, and why aren’t you working on them?” This 2.5-day workshop was extremely well received, and gave rise to a new unit for our introductory workshop.

  • Assisting Others[3]: A two-weekend (training, then practicum) workshop investigating the close link between helping others debug their problems, and better debugging your own problems. We ran a version of this in the Bay Area that worked, and an abridged version in the UK that didn’t. (This was our fault. We’re sorry.)

  • Attention Workshop: A 2.5-day workshop on clearing mental space. This failed and taught us some important points about what doesn’t work.

  • Epistemic Rationality for Effective Altruists: A standalone 2.5-day workshop on applying techniques from the introductory workshop to factual questions, especially those related to effective altruism. (More on this below.) The attendees from this and the Hamming workshop spontaneously organized recurring meetups for themselves.

Our alumni community continues to grow. There are now 550 CFAR alumni, counting 90 from SPARC. It’s a high-initiative group. Startups by CFAR alumni include: Apptimize; Bellroy; Beeminder; Complice; Code Combat; Draftable; MealSquares; OhmData; Praxamed; Vesparum; Teleport; Watu; Wave; ZeroCater.[4] There is a highly active mailing list with over 400 members and over 600 conversation threads, over 30 of which were active in the last month. We also ran our first-ever alumni reunion, and started a weekly alumni dojo. This enabled further curricular experimentation, and allowed alumni ideas and experiences to feed into curricular design.

SPARC happened again, with a more honed curriculum and nearly twice as many students.

Basic operations improved substantially. We’ll say more on this in section 2.

Iteration on the flagship workshop continues. We’ll say more on this (including details of what we learned, and what remains puzzling) in section 3.

Improving operations

The two driving themes of CFAR during 2014 were making our operations more stable and sustainable, and a successful struggle to pull our introductory workshop out of a local optimum and back on track toward something more like a ‘full prototype’ of the CFAR concept.

At the end of 2013, we had negative $30,000 and had borrowed money to make payroll, placing us in the ‘very early stage, struggling startup’ phase. Almost all of our regular operations, such as scheduling interviews for workshop admissions, were being done by hand. Much of our real progress in 2014 consisted of making things run smoothly and getting past the phase where treading water requires so many weekly hours that nobody has time for anything else. Organizational capital is real, and we had to learn the habit of setting aside time and effort for accumulating it. (In retrospect, we were around a year too slow to enter this phase, although in the very early days it was probably correct to be building everything to throw away.)

A few of the less completely standard lessons we think we learned are as follows:

  • Rank-order busyness, especially if you’re passing up organizational-capital improvement tasks. Think “This is one of the 3 busiest weekends of the year,” not “I’m too busy to do it right now.” This tells you how large a hit you take from allowing “important but not urgent” tasks to be postponed during times at least that busy, and it forces calibration.

  • Even in crunch times, take moments to update. (E.g., do one-sentence journal entries about what just happened / ideas for improvement after each Skype call.) The crunchiest moments are often also the most important to optimize, and even a single sentence of thought can give you a lot of the value from continuing to optimize.

  • Use arithmetic to estimate the time/money/staff cost of continuing to do Y the usual way, versus optimizing it. If the arithmetic indicates 10X or more savings, do it even if it requires some up-front cost. (No really, actually do the arithmetic.)
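The last lesson above really is just arithmetic. Here is a minimal sketch of doing it, with hypothetical numbers (none of these figures are from CFAR; the scheduling example and hours are illustrative assumptions):

```python
# "Actually do the arithmetic": compare the total cost of keeping a manual
# process versus paying an up-front cost to optimize it.

def hours_over_horizon(upfront_hours: float, hours_per_week: float, weeks: int) -> float:
    """Total staff-hours spent on a task over `weeks`, given a one-time
    setup cost and a recurring weekly cost."""
    return upfront_hours + hours_per_week * weeks

# Hypothetical: scheduling interviews by hand takes 6 hours every week.
manual = hours_over_horizon(0, 6, weeks=52)         # 312 hours in year one

# Hypothetical: building an online scheduler takes 20 hours once,
# then 0.2 hours/week of upkeep.
automated = hours_over_horizon(20, 0.2, weeks=52)   # ~30.4 hours in year one

print(manual / automated)  # ~10x savings in year one; by the rule above, optimize
```

The point of writing it down rather than eyeballing it is that the up-front cost looks scary in the moment, while the recurring cost is invisible until summed.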

We also learned a large number of other, standard lessons. As of the end of 2014, we think that basic processes at CFAR have improved substantially. We have several months of runway in the bank account; our finances are still precarious, but at least not negative, and we think they’re on an improving path. Our workshop interviews and follow-up sessions have an online interface for scheduling instead of being done by hand (which frees a rather surprising amount of energy). The workshop instructors are almost entirely not doing workshop ops. Accounting has been streamlined. The office has nutritious food easily available, without the need to quit working when one gets hungry.

CFAR feels like it is out of the very-early-startup stage, and able to start focusing on things other than just staying afloat. We feel sufficiently non-overwhelmed that we can take the highest-value opportunities we run into, rather than having all staff members overcommitted at all times. We have a clearer sense of what CFAR is trying to do; of what our internal decision-making structure is; of what each of our roles is; of the value of building good institutions for recording our heuristic updates; etc. And we have the will, momentum, and knowledge with which to continue improving our organizational capital over 2015.

Attempts to go beyond the current workshop and toward the ‘full prototype’ of CFAR: our experience in 2014 and plans for 2015

Where are we spending the dividends from that organizational capital? More ambitious curriculum. Specifically, a “full prototype” of the CFAR aim.

Recall that the premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. By a “prototype”, or “minimum strategic product”, we mean a product that actually demonstrates that the above goal is viable (and, thus, more directly justifies CFAR’s claim to be an effective altruist project). For CFAR, this will probably require meaningfully boosting some fraction of participants along all three axes (epistemic rationality; real-world competence; and tendency to do good in the world).[5]

So that’s our target for 2015. In the rest of this section, we’ll talk about what CFAR did during 2014, go into greater detail on our attempt to build a curriculum for epistemic rationality, and describe our 2015 goals in more detail.

---

One of the future premises of CFAR is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.); we aim to eventually be an evidence-based organization. In our present state this continues to be a lot harder than we would like; evaluation of our 2014 workshops, for example, was done via crude “what do you feel you learned?” surveys and our own gut impressions. The sort of randomized trial we ran in 2012 is extremely expensive for us because it requires randomly not admitting workshop attendees, and we don’t presently have good-enough outcome metrics to justify that expense. Life outcomes, which we see as a gold standard, are big noisy variables with many contributing factors; there’s a lot that adds to or subtracts from your salary besides having attended a CFAR workshop, which means that the randomized tests we can afford to run on life outcomes are underpowered. Testing later ability to perform specific skills doesn’t seem to stress-test the core premise in the same way. In 2014 we continued to track correlational data and ran more detailed random follow-up surveys; this is just enough to keep such analyses in the set of things we regularly do, and to remind ourselves that we are supposed to be doing better science later.
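The “underpowered” point can be made concrete with a standard sample-size calculation. This is a sketch with illustrative numbers, not CFAR’s data: the effect size is an assumption, and the formula is the usual two-sided z-test approximation for a two-sample comparison.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate participants needed per arm of a randomized trial to detect
    a standardized effect (Cohen's d) with a two-sided z-test at the given
    significance level and power: n = 2 * ((z_{a/2} + z_b) / d)^2."""
    z = NormalDist().inv_cdf(1 - alpha / 2) + NormalDist().inv_cdf(power)
    return ceil(2 * (z / effect_size) ** 2)

# If a workshop shifts a noisy life outcome (say, salary) by d = 0.2 standard
# deviations, each arm needs roughly 400 people:
print(n_per_arm(0.2))  # 393
```

Since each "control" participant is a workshop applicant who was randomly turned away, trials of that size are far beyond what a 9-workshops-per-year organization can afford, which is the sense in which affordable trials on life outcomes are underpowered.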

At the start of 2014, we thought our workshops had reached a point of decent order, and we were continuing to tweak them. Partway through 2014 we realized we had reached a local optimum and become stuck (well short of a full prototype / minimum strategic product). So then we smashed everything with a hammer and tried:

  • 4 different advanced workshops for alumni:

    • An epistemic rationality workshop for effective altruist alumni;

    • An alumni workshop on focusing attention (failed);

    • An alumni workshop on the Hamming Question, “What are your most important life problems? Why aren’t you solving them?”;

    • 2 attempts at an alumni workshop on how to do 1-on-1 teaching / assistance of cognitive skills (the first succeeded, the second failed; our fault).

  • A 1.5-day version of the introductory workshop;

  • A workshop with only 10 participants, with the entire class taught in a single room (extremely popular, but not yet scalable);

  • Shorter modules breaking up the 60-minute-unit default;

  • An unconference-style format for the 2014 alumni reunion.

These experiments ended up feeding back into the flagship workshop, and we think we’re now out of the local optimum and making progress again.

Epistemic rationality curriculum

In CFAR’s earliest days, we thought epistemic rationality (figuring out the answers to factual questions) was the main thing we were supposed to teach, and we took some long-suffering volunteers and started testing units on them. Then it turned out that while all of our material was pretty terrible, the epistemic rationality parts were even more terrible compared to the rest of it.

At first our model was that epistemic rationality was hard and we needed to be better teachers, so we set out to learn general teaching skills. People began to visibly enjoy many of our units. But not the units we thought of as “epistemic rationality”. They still visibly suffered through those.

We started to talk about “the curse of epistemic rationality”, and it made us worry about whether it would be worth having a CFAR if we couldn’t resolve it somehow. Figuring out the answers to factual questions, the sort of subject matter that appears in the Sequences, the kind of work that we think of scientists as carrying out, felt to us like it was central to the spirit of rationality. We had a sense (and still do) that if all we could do was teach people how to set up trigger-action systems for remembering to lock their house doors, or even turn an ugh-y feeling of needing to do a job search into a series of concrete actions, this still wouldn’t be making much progress on the sanity-requiring challenges of the next decades. We were worried it wouldn’t contribute strategic potential to effective altruism.

So we kept the most essential-feeling epistemic rationality units in the workshop despite participants’ lowish unit ratings, and despite our own feeling that those units weren’t “clicking”, and we thought: “Maybe, if we have workshops full of units that people like, we can just make them sit through some units that they don’t like as much, and get people to learn epistemic rationality that way.” The “didn’t like” part was painful no matter what story we stuck on it. We rewrote the Bayes unit from scratch more or less every workshop. All of our “epistemic rationality” units changed radically every month.

One ray of light appeared in mid-2013 with the Inner Simulator unit, which included techniques for imagining future situations to see how surprised you felt by them, and using this to determine whether your Inner Simulator really strongly expected a new hire to work out, or whether you were in fact certain that your project would be done by Thursday. This was something we considered an “epistemic rationality” unit at the time, and it worked, in the sense that it (a) set up concepts that fed into our other units, (b) seemed to actually convey some useful skills that people noticed they were learning, and (c) people didn’t hate it.

(And it didn’t feel like we were just trying to smuggle it in from ulterior motives about skills we thought effective altruists ought to have; it felt like we were actually patching concrete problems.)

A miracle had appeared! We ignored it and kept rewriting all the other “epistemic rationality” units every month.

But a lesson that we only understood later started to seep in. We started thinking of some of our other units as having epistemic rationality components in them, and this in turn changed the way we practiced, and taught, the other techniques.

The sea change in our thinking might be summarized as the shift from “epistemic rationality is about whole units that answer factual questions” to there being a truth element that appears in many skills: a point where you would like your System 1 or System 2 to see some particular fact as true, or figure out what is true, or resolve an argument about what will happen next.

  • We used to think of Comfort Zone Expansion[6] as being about desensitization. We would today think of it as being about, for example, correcting your System 1’s anticipation of what happens when you talk to strangers.

  • We used to think of Urge Propagation[6] as being about applying behaviorist conditioning techniques to yourself. Today we teach a very different technique under the same name: a technique that is about dialoguing with your affective brain until System 1 and System 2 acquire a common causal model of whether task X will in fact help with the things you most care about.

  • We thought of Turbocharging[6] as being about instrumental techniques for acquiring skills quickly through practice. Today we would also frame it as, “Suppose you didn’t know you were supposed to be ‘Learning Spanish’. What would an outside-ish view say about what skill you might be practicing? Is it filling in blank lines in workbooks?”

  • We were quite cheered when we tried entirely eliminating the Bayes unit and found that we could identify a dependency in other, clearly practical, units that wanted to call on the ability to look for evidence or identify evidence.

  • Our Focused Grit and Hard Decisions units are entirely “epistemic”; they are straight-out just about acquiring more accurate models of the world. But they don’t feel like the old “curse of epistemic rationality” units, because they begin with an actual felt System 1 need (“what shall I do when I graduate?” or similar), and they stay in contact with System 1’s reasoning process all the way through.

When we were organizing the UK workshop at the end of 2014, there was a moment where we had the sudden realization: “Hey, maybe almost all of our curriculum is secretly epistemic rationality, and we can organize it into ‘Epistemic Rationality for the Planning Brain’ on day 1 and ‘Epistemic Rationality for the Affective Brain’ on day 2, and this makes our curriculum so much denser that we’ll have room for the Hamming Question on day 3.” This didn’t work as well in practice as it did in our heads (though it still went over okay), but we think this just means that the process of digesting this insight is ongoing.

We have hopes of making a lot of progress here in 2015. It feels like we’re back on track to teaching epistemic rationality, in ways where it’s forced by the need to usefully tackle life problems, not because we tacked it on. And this in turn feels like we’re back on track toward teaching that important thing we wanted to teach, the one with strategic implications containing most of CFAR’s expected future value.

(And the units we think of as “epistemic” no longer get rated lower than all our other units; and our alumni workshop on Epistemic Rationality for Effective Altruists went over very well, and does seem to have helped validate the propositions that “people who care strongly about EA’s factual questions are good audiences for what we think of as relevant epistemic skills” and “having learned CFAR basics actually does help for learning more abstract epistemic rationality later”.)

Goals for 2015

In 2015, we intend to keep building organizational capital, and to use those dividends to keep pushing on the epistemic rationality curriculum, and pushing toward the minimum strategic product that stress-tests CFAR’s core value propositions. We’ve also set the following concrete goals[7]:

  • Find some way to track a metric for ‘how likely we think this person is to end up being strategically useful to the world’, even if it’s extremely crude.[8]

  • Actually start tracking it, even if internally, subjectively, and terribly.

  • Try to boost alumni scores on the three components of “figure out true things”, “be effective”, and “do-gooding” (from our extremely crude measure).

  • Cause 30 new people to become engaged in high-impact do-gooding in some interesting way, including 10+ who have high status outside EA and no previous involvement with it.

  • Cause 10 high-impact do-gooder alumni to say that, because of interacting with CFAR, they became much more skilled/effective/well-targeted on strategically important things. Have this also be plausible to their coworkers.

Nuts, Bolts, and Financial Details

Total expenditures
Our total expenditures in 2014 came to about $840k. This number includes about $330k of non-staff direct workshop costs (housing, food, etc.), which is offset by the associated workshop revenue; if one excludes this number, our total expenditures in 2014 came to about $510k.
Basic operating expenses
Our basic operating expenses in 2014 were fairly similar to 2013: a total of about $42k/month, roughly:
  • $5.3k/month for office rent;

  • $30k/month for salaries (includes tax, health insurance, and contractors; our full-time people are still paid $3.5k/month);

  • $7k/month for total other non-workshop costs (flights and fees to attend others’ trainings; office groceries; storage unit; software subscriptions; …)

Flagship Workshops
We ran 9 workshops in 2014, which generated about $435k in revenue, but also $210k in non-staff costs (mostly food and housing for workshop participants), for a net of about $225k in additional money (or about $25k/workshop), ignoring staff cost.
Per-workshop staff time-cost is significantly lower than it was (counting sales, pre-workshop prep, instruction, and follow-ups): perhaps 100 person-days per workshop going forward, compared against perhaps 180 person-days per workshop in 2013. (We aim to decrease this further in 2015 while maintaining or increasing quality.)
Per-workshop net revenue is, on the other hand, roughly similar to 2013; this reflects an intentional effort to move staff time away from short-term sales toward investment in a longer-term press funnel, curriculum development (e.g., the alumni events), and other shifts toward our longer-term significance.
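The workshop economics above reduce to simple arithmetic; as a sketch (using only the dollar and person-day figures quoted in this section):

```python
# Flagship-workshop economics from the figures above (amounts in $k).
revenue = 435          # gross revenue across 9 workshops
non_staff_costs = 210  # food, housing, etc.
workshops = 9

net = revenue - non_staff_costs   # net of non-staff costs, ignoring staff time
per_workshop = net / workshops    # net per workshop

# Staff time is the other major cost: ~100 person-days per workshop now,
# versus ~180 in 2013.
staff_days_saved = (180 - 100) * workshops  # person-days freed per 9-workshop year

print(net, per_workshop, staff_days_saved)
```

Note that the net figure deliberately excludes staff salaries, which is why the workshops alone don’t cover the ~$42k/month operating budget.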
Alumni reunion, alumni workshops, alumni dojo...
We ran an alumni reunion, 4 alumni workshops, and a continuing alumni dojo. We intentionally kept the cost of these low to participants, and sliding-scale, so as to help build the community that can take the art forward.
Detail:
  • Alumni reunion: $34k income; $38k non-staff costs (for ~100 participants)

  • Hamming: $3.6k revenue; $3k non-staff costs

  • Assisting Others: $2.1k revenue; $3.2k non-staff costs

  • Attention: $3.3k revenue; $2.7k non-staff costs

  • Epistemic Rationality for Effective Altruists: $5k revenue; $3k costs

  • Dojo: free.

We also ran a 1.5-day beta workshop for beginners:
  • “A taste of rationality”: $5k revenue; $2.6k non-staff costs.

SPARC
SPARC 2014’s non-staff costs came to $62k, and were covered by Dropbox, Quixey, and MIRI (although, as with our other programs, considerable CFAR staff time also went into SPARC).
Balance sheet
CFAR has about $130k going into 2015. (The $30k short-term loan we took last year was repaid as scheduled, following last year’s fundraising drive.)
Summary
CFAR is more financially stable than it was a year ago, but remains dependent on donations to make ends meet, and still more dependent on donations if it is to, e.g., outsource the accounting, further streamline per-workshop staff time-costs, and put serious, focused effort into developing the epistemic rationality and do-gooding impacts.

The big picture and how you can help

CFAR seems to many of us to be among the efforts most worth investing in. This isn’t because our present workshops are all that great. Rather, it is because, in terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust. And we suspect that CFAR is a promising kernel from which to help with that effort.
As noted, we aim in 2015 to get all the way to a “full prototype”: a point from which we are actually visibly helping in the aimed-for way. This will be a tricky spot to get to. Our experience slowly coming to grips with epistemic rationality is probably more the rule than the exception, and I suspect we’ll run into a number of curve balls on the path to the prototype.
But with your help (donations are at this stage critical to being able to put serious, focused effort into building the prototype, instead of being terribly distracted by staying alive), I suspect that we can put in the requisite focus, and can have the prototype in hand by the end of 2015.
...
Besides donations, we are actually in a good position now to use your advice, your experience, and your thoughts on how to navigate CFAR’s remaining gaps; we have enough space to take a breath and think strategically.
We’re hoping 2015 will also be a year when CFAR alumni and supporters scale up their connections and their ambitions, launching more startups and other projects. Please keep in touch if you do this; we’d like our curriculum-generation process to continue to connect to live problems.
A very strong way to help, also, is to come to a workshop, and to send your friends there. It keeps CFAR going, we always want there to be more CFAR alumni, and it might even help with that quest. (The data strongly indicates that your friends will thank you for getting them to come… and will do so even more 6 months later!)
And do please donate to the Winter 2014 fundraising drive!

[1] That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Many thanks to Peter McCluskey, Jesse Liptrap, Nick Tarleton, Stephanie Zolayvar, Arram Sabeti, Liron Shapira, Ben Hoskin, Eric Rogstad, Matt Graves, Alyssa Vance, Topher Hallquist, and John Clasby for together putting up $120k in matching funds.

[2] This post is a collaborative effort by many at CFAR.

[3] The title we ran it under was “TA training”, but the name desperately needs revision.

[4] This list is missing several startups I can almost recall, and probably several others I can’t; please PM me if you remember one I missed. Many of the startups on this list have multiple founders who are CFAR alumni. Omitted from this list are startups that were completed before the alumni met us, e.g. Skype; we included, however, startups that were founded before folks met us and carried on after they became alumni (even when we had no causal impact on the startups). Also of note: many CFAR alumni are in founding or executive positions at EA-associated non-profits, including CEA, CSER, FLI, Leverage, and MIRI. One reason we’re happy about this is that it means the curriculum we’re developing is being developed in concert with people who are trying to really actually accomplish hard goals, and who therefore want more from techniques than just “does this sound cool?”

[5] Ideally, such a prototype might accomplish increases in (1), (2), and (3) in a manner that felt like facets of a single art, or that all drew upon a common base of simpler cognitive skills (such as subskills for getting accurate beliefs into System 1, for navigating internal disagreement, or for overcoming learned helplessness). A “prototype” would thus also be a product that, when we apply local optimization to it, takes us to curricula that are strategically important to the world (rather than, say, taking us to well-honed “feel inspired about your life” workshops, or something).

Relative to this ideal, the current curriculum seems to in fact accomplish some of (2), for all that we don’t have RCTs yet; but it is less successful at (1) and (3). (We’d like, eventually, to scale up (2) as well.) However, we suspect the curriculum contains seeds of an art that can succeed at (1) and (3); and we aim to demonstrate this in 2015.

[6] Apologies for the jargon. It is probably about time we wrote up a glossary, but we don’t have one yet. If you care, you can pick up some of the vocabulary from our sample workshop schedule.

[7] This isn’t the detailed tactical plan; we’ll need one of those separately, and we have a partial version that this margin is too small to contain. This list is meant to be a way for you and us to tell whether we won, at the end of 2015.

[8] The Apgar score for assessing newborn health is inspiring here; if you’ve not seen it before, and you’re wondering how one could possibly come up with a metric, you might glance at its Wikipedia page. Basically, instead of coming up with a single 0-to-10 newborn health scale, Dr. Apgar chose 5 simpler components (newborn color, newborn heart rate, etc.), came up with very simple 0-to-2 measures for these, and then added them.
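An Apgar-style metric of the kind footnote [8] describes is easy to sketch: pick a few simple components, score each 0–2, and add. The component names below are hypothetical illustrations, not an actual CFAR metric.

```python
# Sketch of an Apgar-style composite score: a handful of simple components,
# each rated 0-2, summed into a single crude number. Component names are
# hypothetical placeholders.

COMPONENTS = ["epistemic habits", "execution ability", "do-gooding engagement"]

def composite_score(ratings: dict) -> int:
    """Sum of per-component ratings, each clamped to the 0-2 range."""
    return sum(max(0, min(2, ratings[c])) for c in COMPONENTS)

# Someone rated 2, 1, 2 on the three components scores 5 out of a possible 6.
example = {"epistemic habits": 2, "execution ability": 1, "do-gooding engagement": 2}
print(composite_score(example))  # 5
```

The design virtue Apgar illustrates is that several crude-but-unambiguous sub-ratings are easier to assign consistently than one holistic 0-to-10 judgment.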