CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype

Summary: We outline CFAR’s purpose, our history in 2014, and our plans heading into 2015.

One of the reasons we’re publishing this review now is that we’ve just launched our annual matching fundraiser, and want to provide the information prospective donors need to decide. This is the best time of year to donate to CFAR: donations up to $120k will be matched until January 31.[1]

To briefly preview: For the first three years of our existence, CFAR mostly focused on getting going. We followed the standard recommendation to build a ‘minimum viable product’, the CFAR workshops, that could test our ideas and generate some revenue. Coming into 2013, we had a workshop that people liked (a 9.3 average rating on “Are you glad you came?”; a more recent random survey showed a 9.6 average rating on the same question 6-24 months later), which helped keep the lights on and gave us articulate, skeptical, serious learners to iterate on. At the same time, the workshops are not everything we would want in a CFAR prototype; the current core workshop does not stress-test most of our hopes for what CFAR can eventually do. The premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. We have dreams of scaling up some particular kinds of sanity. Our next goal is to build the minimum strategic product that more directly justifies CFAR’s claim to be an effective altruist project.[2]

Highlights from 2014

Our brand perception improved significantly in 2014, which matters because it leads to companies being willing to pay for workshop attendance. We were covered twice in Fast Company, as well as in the Wall Street Journal and The Reasoner. Other mentions include Forbes, Big Think, Boing Boing, and Lifehacker. We’ve also had some interest in potential training for tech companies.

Our curriculum is gaining a second tier in the form of alumni workshops. We tried 4 experimental alumni workshops, 3 of which went well enough to be worth iterating on:

  • The Hamming Question: “What are the most important problems in your life, and why aren’t you working on them?” This 2.5-day workshop was extremely well received, and gave rise to a new unit for our introductory workshop.

  • Assisting Others[3]: A two-weekend (training, then practicum) workshop investigating the close link between helping others debug their problems, and better debugging your own problems. We ran a version of this in the Bay Area that worked, and an abridged version in the UK that didn’t. (This was our fault. We’re sorry.)

  • Attention Workshop: A 2.5-day workshop on clearing mental space. This failed and taught us some important points about what doesn’t work.

  • Epistemic Rationality for Effective Altruists: A standalone 2.5-day workshop on applying techniques from the introductory workshop to factual questions, especially those related to effective altruism. (More on this below.) The attendees from this and the Hamming workshop spontaneously organized recurring meetups for themselves.

Our alumni community continues to grow. There are now 550 CFAR alumni, counting 90 from SPARC. It’s a high-initiative group. Startups by CFAR alumni include: Apptimize; Bellroy; Beeminder; Complice; Code Combat; Draftable; MealSquares; OhmData; Praxamed; Vesparum; Teleport; Watu; Wave; ZeroCater.[4] There is a highly active mailing list with over 400 members and over 600 conversation threads, over 30 of which were active in the last month. We also ran our first-ever alumni reunion, and started a weekly alumni dojo. This enabled further curricular experimentation, and allowed alumni ideas and experiences to feed into curricular design.

SPARC happened again, with more-honed curriculum and nearly twice as many students.

Basic operations improved substantially. We’ll say more on this in section 2.

Iteration on the flagship workshop continues. We’ll say more on this (including details of what we learned, and what remains puzzling) in section 3.

Improving operations

The two driving themes of CFAR during 2014 were (1) making our operations more stable and sustainable, and (2) a successful struggle to pull our introductory workshop out of a local optimum and get back on track toward something more like a ‘full prototype’ of the CFAR concept.

At the end of 2013, our bank balance was negative $30,000 and we had borrowed money to make payroll, placing us in the ‘very early stage, struggling startup’ phase. Almost all of our regular operations, such as scheduling interviews for workshop admissions, were being done by hand. Much of our real progress in 2014 consisted of making things run smoothly and getting past the phase where treading water requires so many weekly hours that nobody has time for anything else. Organizational capital is real, and we had to learn the habit of setting aside time and effort for accumulating it. (In retrospect, we were around a year too slow to enter this phase, although in the very early days it was probably correct to be building everything to throw away.)

A few of the less completely standard lessons we think we learned are as follows:

  • Rank-order busyness, especially if you’re passing up organizational-capital improvements. Think “This is one of the 3 busiest weekends of the year” rather than “I’m too busy to do it right now.” The former tells you exactly how large a hit you’re taking by letting “important but not urgent” tasks be postponed during any period at least that busy, and it forces calibration.

  • Even in crunch times, take moments to update. (E.g., do one-sentence journal entries about what just happened /​ ideas for improvement after each Skype call.) The crunchiest moments are often also the most important to optimize, and even a single sentence of thought can give you a lot of the value from continuing to optimize.

  • Use arithmetic to estimate the time/​money/​staff cost of continuing to do Y the usual way, versus optimizing it. If the arithmetic indicates 10X or more savings, do it even if it requires some up-front cost. (No really, actually do the arithmetic.)
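As a toy illustration of what we mean by “actually do the arithmetic” (the numbers below are hypothetical, for illustration only, not CFAR’s actual figures):

```python
# Toy cost-benefit arithmetic for automating a recurring task.
# All numbers are hypothetical, for illustration only.

def hours_per_year(minutes_per_instance, instances_per_week):
    """Annual hours spent doing the task by hand."""
    return minutes_per_instance * instances_per_week * 52 / 60

# Suppose scheduling interviews by hand takes ~20 minutes each, ~15 times a week,
# and setting up an online scheduler would cost ~25 hours of one-time work.
manual_hours = hours_per_year(20, 15)   # 260.0 hours/year
setup_hours = 25

payback_ratio = manual_hours / setup_hours
print(f"{manual_hours:.0f} hours/year by hand; payback {payback_ratio:.1f}x in year one")
```

With these made-up numbers the ratio clears 10x, so by the rule above the up-front cost is worth paying.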

We also learned a large number of other standard lessons. As of the end of 2014, we think that basic processes at CFAR have improved substantially. We have several months of runway in the bank account—our finances are still precarious, but at least not negative, and we think they’re on an improving path. Our workshop interviews and follow-up sessions have an online interface for scheduling instead of being done by hand (which frees a rather surprising amount of energy). The workshop instructors are almost entirely not doing workshop ops. Accounting has been streamlined. The office has nutritious food easily available, without the need to quit working when one gets hungry.

CFAR feels like it is out of the very-early-startup stage, and able to start focusing on things other than just staying afloat. We feel sufficiently non-overwhelmed that we can take the highest-value opportunities we run into, rather than having all staff members overcommitted at all times. We have a clearer sense of what CFAR is trying to do; of what our internal decision-making structure is; of what each of our roles is; of the value of building good institutions for recording our heuristic updates; etc. And we have will, momentum, and knowledge with which to continue improving our organizational capital over 2015.

Attempts to go beyond the current workshop and toward the ‘full prototype’ of CFAR: our experience in 2014 and plans for 2015

Where are we spending the dividends from that organizational capital? More ambitious curriculum. Specifically, a “full prototype” of the CFAR aim.

Recall that the premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth; (2) be strategically effective; and (3) do good in the world. By a “prototype”, or “minimum strategic product”, we mean a product that actually demonstrates that the above goal is viable (and, thus, that more directly justifies CFAR’s claim to be an effective altruist project). For CFAR, this will probably require meaningfully boosting some fraction of participants along all three axes (epistemic rationality; real-world competence; and tendency to do good in the world). [5]

So that’s our target for 2015. In the rest of this section, we’ll talk about what CFAR did during 2014, go into greater detail on our attempt to build a curriculum for epistemic rationality, and describe our 2015 goals in more detail.


One of the future premises of CFAR is that we can eventually apply the full scientific method to the problem of constructing a rationality curriculum (by measuring variations, counting things, re-testing, etc.); we aim to eventually be an evidence-based organization. In our present state this continues to be a lot harder than we would like; our 2014 workshop evaluation, for example, was done via crude “What do you feel you learnt?” surveys and our own gut impressions. The sort of randomized trial we ran in 2012 is extremely expensive for us because it requires randomly not admitting workshop applicants, and we don’t presently have good-enough outcome metrics to justify that expense. Life outcomes, which we see as the gold standard, are big noisy variables with many contributing factors—there’s a lot that adds to or subtracts from your salary besides having attended a CFAR workshop—which means that the randomized tests we can afford to run on life outcomes are underpowered. Testing later ability to perform specific skills doesn’t stress-test the core premise in the same way. In 2014 we continued to track correlational data and ran more detailed random follow-up surveys; this is just enough to keep such analyses in the set of things we regularly do, and to remind ourselves that we are supposed to be doing better science later.

At the start of 2014, we thought our workshops had reached a point of decent order, and we were continuing to tweak them. Partway through 2014 we realized we had reached a local optimum and become stuck (well short of a full prototype /​ minimum strategic product). So then we smashed everything with a hammer and tried:

  • 4 different advanced workshops for alumni:

    • An epistemic rationality workshop for effective altruist alumni;

    • An alumnus workshop on focusing attention (failed);

    • An alumnus workshop on the Hamming Question, “What are your most important life problems? Why aren’t you solving them?”

    • 2 attempts at an alumnus workshop on how to do 1-on-1 teaching /​ assistance of cognitive skills (first succeeded, second failed; our fault).

  • A 1.5-day version of the introductory workshop;

  • A workshop with only 10 participants with the entire class taught in a single room (extremely popular, but not yet scalable);

  • Shorter modules breaking up the 60-minute-unit default;

  • An unconference-style format for the 2014 alumni reunion.

These experiments ended up feeding back into the flagship workshop, and we think we’re now out of the local optimum and making progress again.

Epistemic rationality curriculum

In CFAR’s earliest days, we thought epistemic rationality (figuring out the answers to factual questions) was the main thing we were supposed to teach, and we took some long-suffering volunteers and started testing units on them. Then it turned out that while all of our material was pretty terrible, the epistemic rationality parts were even more terrible compared to the rest of it.

At first our model was that epistemic rationality was hard and we needed to be better teachers, so we set out to learn general teaching skills. People began to visibly enjoy many of our units. But not the units we thought of as “epistemic rationality”. They still visibly suffered through those.

We started to talk about “the curse of epistemic rationality”, and it made us worry about whether it would be worth having a CFAR if we couldn’t resolve it somehow. Figuring out the answers to factual questions, the sort of subject matter that appears in the Sequences, the kind of work that we think of scientists as carrying out, felt to us like it was central to the spirit of rationality. We had a sense (and still do) that if all we could do was teach people how to set up trigger-action systems for remembering to lock their house doors, or even turn an ugh-y feeling of needing to do a job search into a series of concrete actions, this still wouldn’t be making much progress on sanity-requiring challenges over the next decades. We were worried it wouldn’t contribute strategic potential to effective altruism.

So we kept the most essential-feeling epistemic rationality units in the workshop despite participants’ lowish unit ratings, and despite our own feeling that those units weren’t “clicking”, and we thought: “Maybe, if we have workshops full of units that people like, we can just make them sit through some units that they don’t like as much, and get people to learn epistemic rationality that way.” The “didn’t like” part was painful no matter what story we stuck on it. We rewrote the Bayes unit from scratch more or less every workshop. All of our “epistemic rationality” units changed radically every month.

One ray of light appeared in mid-2013 with the Inner Simulator unit, which included techniques for imagining future situations to see how surprised you felt by them, and using this to determine whether your Inner Simulator really strongly expected a new hire to work out, or whether you were in fact certain that your project would be done by Thursday. This was something we considered an “epistemic rationality” unit at the time, and it worked, in the sense that it (a) set up concepts that fed into our other units, (b) seemed to actually convey some useful skills that people noticed they were learning, and (c) people didn’t hate it.

(And it didn’t feel like we were just trying to smuggle it in from ulterior motives about skills we thought effective altruists ought to have, but that we were actually patching concrete problems.)

A miracle had appeared! We ignored it and kept rewriting all the other “epistemic rationality” units every month.

But a lesson that we only understood later started to seep in. We started thinking of some of our other units as having epistemic rationality components in them—and this in turn changed the way we practiced, and taught, the other techniques.

The sea change in our thinking might be summarized as the shift from “epistemic rationality lives in whole units that answer factual questions” to “a truth element appears within many skills”: a point where you would like your System 1 or System 2 to see some particular fact as true, figure out what is true, or resolve a disagreement about what will happen next.

  • We used to think of Comfort Zone Expansion[6] as being about desensitization. We would today think of it as being about, for example, correcting your System 1’s anticipation of what happens when you talk to strangers.

  • We used to think of Urge Propagation[6] as being about applying behaviorist conditioning techniques to yourself. Today we teach a very different technique under the same name: a technique for dialoguing with your affective brain until System 1 and System 2 acquire a common causal model of whether task X will in fact help with the things you most care about.

  • We used to think of Turbocharging[6] as being about instrumental techniques for acquiring skills quickly through practice. Today we would also frame it as, “Suppose you didn’t know you were supposed to be ‘Learning Spanish’. What would an outside-ish view say about what skill you are actually practicing? Is it filling in blank lines in workbooks?”

  • We were quite cheered when we tried eliminating the Bayes unit entirely and found that other, clearly practical units depended on it; they wanted to call on the ability to look for and recognize evidence.

  • Our Focused Grit and Hard Decisions units are entirely “epistemic”—they are straight out just about acquiring more accurate models of the world. But they don’t feel like the old “curse of epistemic rationality” units, because they begin with an actual felt System 1 need (“what shall I do when I graduate?” or similar), and they stay in contact with System 1’s reasoning process all the way through.

When we were organizing the UK workshop at the end of 2014, there was a moment where we had the sudden realization, “Hey, maybe almost all of our curriculum is secretly epistemic rationality and we can organize it into ‘Epistemic Rationality for the Planning Brain’ on day 1 and ‘Epistemic Rationality for the Affective Brain’ on day 2, and this makes our curriculum so much denser that we’ll have room for the Hamming Question on day 3.” This didn’t work as well in practice as it did in our heads (though it still went over okay) but we think this just means that the process of our digesting this insight is ongoing.

We have hopes of making a lot of progress here in 2015. It feels like we’re back on track to teaching epistemic rationality—in ways where it’s forced by need to usefully tackle life problems, not because we tacked it on. And this in turn feels like we’re back on track toward teaching that important thing we wanted to teach, the one with strategic implications containing most of CFAR’s expected future value.

(And the units we think of as “epistemic” no longer get rated lower than all our other units; and our alumni workshop on Epistemic Rationality for Effective Altruists went over very well and does seem to have helped validate the propositions that “People who care strongly about EA’s factual questions are good audiences for what we think of as relevant epistemic skills” and “Having learned CFAR basics actually does help for learning more abstract epistemic rationality later”.)

Goals for 2015

In 2015, we intend to keep building organizational capital, and use those dividends to keep pushing on the epistemic rationality curriculum, and pushing toward the minimum strategic project that stress-tests CFAR’s core value propositions. We’ve also set the following concrete goals[7]:

  • Find some way to track a metric for ‘How likely we think this person is to end up being strategically useful to the world’, even if it’s extremely crude.[8]

  • Actually start tracking it, even if internally, subjectively, and terribly.

  • Try to boost alumni scores on the three components of “Figure out true things”, “Be effective” and “Do-gooding” (from our extremely crude measure).

  • Cause 30 new people to become engaged in high-impact do-gooding in some interesting way, including 10+ with outside high status and no previous involvement with EA.

  • Cause 10 high-impact do-gooder alumni to say that, because of interacting with CFAR, they became much more skilled/​effective/​well-targeted on strategically important things. Have this also be plausible to their coworkers.

Nuts, Bolts, and Financial Details

Total expenditures
Our total expenditures in 2014 came to about $840k. This number includes about $330k of non-staff direct workshop costs (housing, food, etc.), which is offset by the associated workshop revenue; excluding that, our total expenditures in 2014 came to about $510k.
Basic operating expenses
Our basic operating expenses in 2014 were fairly similar to 2013’s: a total of about $42k/month, roughly:
  • $5.3k/​month for office rent;

  • $30k/​month for salaries (includes tax, health insurance, and contractors; our full-time people are still paid $3.5k/​month);

  • $7k/​month for total other non-workshop costs (flights and fees to attend others’ trainings; office groceries; storage unit, software subscriptions; …)

Flagship Workshops
We ran 9 workshops in 2014, which generated about $435k in revenue but also about $210k in non-staff costs (mostly food and housing for workshop participants), for a net of about $225k in additional money (or $25k/workshop), ignoring staff cost.
Per-workshop staff time-cost is significantly lower than it was (counting sales, pre-workshop prep, instruction, and follow-ups): perhaps 100 person-days per workshop going forward, compared against perhaps 180 person-days per workshop in 2013. (We aim to decrease this further in 2015 while maintaining or increasing quality.)
Per-workshop net revenue, on the other hand, is roughly similar to 2013’s; this reflects an intentional effort to move staff time away from short-term sales toward investment in our longer-term press funnel, curriculum development (e.g., the alumni events), and other shifts toward our longer-term significance.
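For readers who want to check the arithmetic, the rounded workshop figures above fit together like this:

```python
# Sanity check on the flagship-workshop figures quoted above (rounded).
workshops = 9
revenue = 435_000          # total 2014 workshop revenue
non_staff_costs = 210_000  # food, housing, etc.

net = revenue - non_staff_costs   # additional money, ignoring staff cost
per_workshop = net / workshops
print(net, round(per_workshop))   # 225000 25000
```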
Alumni reunion, alumni workshops, alumni dojo...
We ran an alumni reunion, 4 alumni workshops, and a continuing alumni dojo. We intentionally kept the cost of these low to participants, and sliding-scale, so as to help build the community that can take the art forward.
  • Alumni reunion: $34k income; $38k non-staff costs (for ~100 participants)

  • Hamming: $3.6k revenue; $3k non-staff costs

  • Assisting thinking: $2.1k revenue; $3.2k non-staff costs

  • Attention: $3.3k revenue; $2.7k non-staff costs

  • Epistemic Rationality for Effective Altruists: $5k revenue; $3k costs

  • Dojo: free.

We also ran a 1.5-day beta workshop for beginners:
  • “A taste of rationality”: $5k revenue; $2.6k non-staff costs.

SPARC 2014’s non-staff costs came to $62k, and were covered by Dropbox, Quixey, and MIRI (although, as with our other programs, considerable CFAR staff time also went into SPARC).
Balance sheet
CFAR has about $130k, going into 2015. (The $30k short-term loan we took last year was repaid as scheduled, following last year’s fundraising drive.)
CFAR is more financially stable than it was a year ago, but it remains dependent on donations to make ends meet, and will be still more dependent on donations if it is to, e.g., outsource the accounting, further streamline per-workshop staff time-costs, and put real, focused effort into developing the epistemic rationality and do-gooding impacts.

The big picture and how you can help

CFAR seems to many of us to be among the efforts most worth investing in. This isn’t because our present workshops are all that great. Rather, it is because, in terms of “saving throws” one can buy for a humanity that may be navigating tricky situations in an unknown future, improvements to thinking skill seem to be one of the strongest and most robust. And we suspect that CFAR is a promising kernel from which to help with that effort.
As noted, we aim in 2015 to get all the way to a “full prototype”: a point from which we are actually, visibly helping in the aimed-for way. This will be a tricky spot to get to. Our experience slowly coming to grips with epistemic rationality is probably more the rule than the exception, and I suspect we’ll run into a number of curve balls on the path to the prototype.
But with your help—donations are at this stage critical to being able to put serious, focused effort into building the prototype, instead of being terribly distracted by staying alive—I suspect that we can put in the requisite focus, and can have the prototype in hand by the end of 2015.
Besides donations, we are now actually in a good position to use your advice, your experience, and your thoughts on how to navigate CFAR’s remaining gaps; we have enough space to take a breath and think strategically.
We’re hoping 2015 will also be a year when CFAR alumni and supporters scale up their connections and their ambitions, launching more startups and other projects. Please keep in touch if you do this; we’d like our curriculum-generation process to continue to connect to live problems.
A very strong way to help, also, is to come to a workshop, and to send your friends. It keeps CFAR going; we always want there to be more CFAR alumni; and it might even help with that quest. (The data strongly indicates that your friends will thank you for getting them to come… and will do so even more 6 months later!)
And do please donate to the Winter 2014 fundraising drive!

[1] That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Much thanks to Peter McCluskey, Jesse Liptrap, Nick Tarleton, Stephanie Zolayvar, Arram Sabeti, Liron Shapira, Ben Hoskin, Eric Rogstad, Matt Graves, Alyssa Vance, Topher Hallquist, and John Clasby for together putting up $120k in matching funds.

[2] This post is a collaborative effort by many at CFAR.

[3] The title we ran it under was “TA training”, but the name desperately needs revision.

[4] This is missing several I can almost-recall and probably several others I can’t; please PM me if you remember one I missed. Many of the startups on this list have multiple founders who are CFAR alumni. Omitted from this list are startups that were completed before the alumni met us, e.g. Skype; we did, however, include startups that were founded before folks met us and carried on after they became alumni (even when we had no causal impact on the startups). Also of note is that many CFAR alumni are in founding or executive positions at EA-associated non-profits, including CEA, CSER, FLI, Leverage, and MIRI. One reason we’re happy about this is that the curriculum we’re developing is being developed in concert with people who are trying to really, actually accomplish hard goals, and who therefore want more from techniques than just “does this sound cool”.

[5] Ideally, such a prototype might accomplish increases in (1), (2), and (3) in a manner that felt like facets of a single art, or that all drew upon a common base of simpler cognitive skills (such as subskills for getting accurate beliefs into System 1, for navigating internal disagreement, or for overcoming learned helplessness). A “prototype” would thus also be a product that, when we apply local optimization to it, takes us to curricula that are strategically important to the world—rather than, say, to well-honed “feel inspired about your life” workshops.

Relative to this ideal, the current curriculum seems to in fact accomplish some of (2), for all that we don’t have RCTs yet; but it is less successful at (1) and (3). (We’d like, eventually, to scale up (2) as well.) However, we suspect the curriculum contains seeds toward an art that can succeed at (1) and (3); and we aim to demonstrate this in 2015.

[6] Apologies for the jargon. It is probably about time we wrote up a glossary; but we don’t have one yet. If you care, you can pick up some of the vocabulary from our sample workshop schedule.

[7] This isn’t the detailed tactical plan; we’ll need one of those separately, and we have a partial version that this margin is too small to contain. This list is meant to be how you and we can tell, at the end of 2015, whether we won.
[8] The Apgar score for assessing newborn health is inspiring here; if you’ve not seen it before, and you’re wondering how one could possibly come up with such a metric, you might glance at its Wikipedia page. Basically, instead of coming up with a single 0-to-10 newborn health scale, Dr. Apgar chose 5 simpler components (newborn color, newborn heart rate, etc.), came up with very simple 0-to-2 measures for each, and then added them.
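To make the additive structure concrete, here is a minimal sketch of an Apgar-style composite score; the component names are paraphrased from Apgar’s, and nothing here is CFAR’s actual metric:

```python
# Apgar-style composite: score a few simple components 0-2 each, then add.
# With 5 components this yields a 0-10 scale, as in the original Apgar score.
# Component names are illustrative paraphrases, not a metric CFAR has adopted.

APGAR_COMPONENTS = ["color", "heart_rate", "reflex_response", "muscle_tone", "breathing"]

def composite_score(ratings):
    """Sum of per-component ratings, each of which must be 0, 1, or 2."""
    for name, rating in ratings.items():
        if rating not in (0, 1, 2):
            raise ValueError(f"{name} must be rated 0-2, got {rating}")
    return sum(ratings.values())

# A maximally healthy newborn scores 2 on every component.
print(composite_score({c: 2 for c in APGAR_COMPONENTS}))  # 10
```

An analogous CFAR metric would swap in crudely-rated components along the lines of “figures out true things”, “is effective”, and “does good”.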