Why CFAR?

Summary: We outline the case for CFAR, including our long-term goal, our plan and progress to date (curriculum design and community-building), our financials, and how you can help.

CFAR is in the middle of our annual matching fundraiser right now. If you’ve been thinking of donating to CFAR, now is probably the best time you’ll have to decide for at least half a year. Donations up to $150,000 will be matched until January 31st; and Matt Wage, who is matching the last $50,000 of donations, has vowed not to donate his $50,000 unless it is matched by incoming donations.[1]

Our workshops are cash-flow positive, and subsidize our basic operations (you are not subsidizing workshop attendees). But we can’t yet run workshops often enough to fully cover our core operations. We also need to do more formal experiments, and we want to create free and low-cost curriculum with far broader reach than the current workshops. Donations are needed to keep the lights on at CFAR, fund free programs like the Summer Program on Applied Rationality and Cognition, and let us do new and interesting things in 2014 (see below, at length).[2]

Our long-term goal

CFAR’s long-term goal is to create people who can and will solve important problems—whatever the important problems turn out to be.[3]

We therefore aim to create a community with three key properties:

  1. Competence—The ability to get things done in the real world. For example, the ability to work hard, follow through on plans, push past your fears, navigate social situations, organize teams of people, start and run successful businesses, etc.

  2. Epistemic rationality—The ability to form relatively accurate beliefs. Especially the ability to form such beliefs in cases where data is limited, motivated cognition is tempting, or the conventional wisdom is incorrect.

  3. Do-gooding—A desire to make the world better for all its people; the tendency to jump in and start/​assist projects that might help (whether by labor or by donation); and ambition in keeping an eye out for projects that might help a lot and not just a little.

Why competence, epistemic rationality, and do-gooding?
To change the world, we’ll need to be able to take effective action (competence). We’ll need to be able to form a good implicit and explicit understanding of the human world and how to shift it. We’ll need to have the best shot we can get at modeling situations yet unseen. We’ll need to solve problems outside the realms where competent business people already find traction (all of which require competence plus epistemic rationality). And we’ll need to blend these abilities with a burning ambition to leave the world far better than we found it (competence plus epistemic rationality plus do-gooding).
And we’ll need a community, not just a set of individuals. It is hard for an isolated individual to figure out what the most important problems are, let alone how to effectively solve them. This is still harder for individuals who have interesting day jobs, and who are busy amassing real-world competence of varied sorts. Communities can assemble a complex world-model piece by piece. Communities can build and sustain motivation, as well, and facilitate the practice and transfer of useful skills. The aim is thus to create a community that, collectively, can figure out what needs doing and can then do it—even when this requires multiple simultaneous competencies (e.g., locating a particular existential risk, and having good scientific connections, and knowing good folks in policy, and knowing how to do good technical research).
We intend to build that sort of community.

Our plan, and our progress to date

How can we create a community with high levels of competence, epistemic rationality, and do-gooding? By creating curricula that teach (or enhance) these properties; by seeding the community with diverse competencies and diverse perspectives on how to do good; and by linking people together into the right kind of community.

We’ve now had two years to execute on this vision.[4] It’s not a lot of time, but it’s enough to get started; and it’s enough that folks should already be able to update as to our ability to execute.
Here’s our current working plan, the progress we’ve made so far, and the pieces we still need to hit.

Curriculum design

In October 2012, we had no money and little visible means of obtaining more.[5] We needed runway; and we needed a way to use that runway to rapidly iterate curriculum.
We therefore focused our initial efforts on making a workshop that could pay its own bills, and at the same time give us data—a workshop that would give us the opportunity to run (and learn from) many further workshops. Our applied rationality workshops have filled this role.

Progress to date

Reported benefits
After about a dozen workshops (and over 100 classes that we’ve designed and tested), we’ve settled on a workshop model that runs smoothly and seems to provide value to our participants, who report a mean of 9.3 out of 10 in response to the question “Are you glad you came?”. In the process we’ve substantially improved our skill at curriculum design: it used to take us about 40 hours to design a unit we regarded as decent (design; test on volunteers; re-design; test on volunteers; etc.). It now takes us about 8 hours to design a unit of the same quality.[6]
Anecdotally, we have many, many stories from alumni about how our workshop increased their competence (both generally and for altruistic ends). For example, alum Ben Toner, CEO of Draftable, recounts that after the July 2012 workshop, “At work, I realized I wasn’t doing anywhere near enough planning. My employees were spending time on the wrong things because I hadn’t planned things out in enough detail to make it clear what was the most important thing to do next. I fixed this immediately after the camp.” Alum Ben Kuhn has described how the CFAR workshop helped his effective altruism group “vastly increase our campus presence—everything from making uncomfortable cold calls to powering through bureaucracy, and from running complex events to quickly updating on feedback.” (Check out our testimonials page for more examples.)
Measurement
Anecdata notwithstanding, the jury is still out regarding the workshops’ usefulness to those who come. During the very first minicamps (the current workshops are agreed to be better), we randomized admission among applicants, yielding 15 admitted participants and 17 controls. Our study was low-powered: effects on, e.g., income would have needed to be very large for us to expect to detect them. Still, we ended up with non-negligible evidence of absence: income, happiness, and exercise did not visibly trend upward one year later. We did detect statistically significant positive impacts on the standard (BFI-10) survey pair for emotional stability, “I see myself as someone who is relaxed, handles stress well” / “I get nervous easily” (p=.002). Also significant were effects on an abridged General Self-Efficacy Scale (sample item: “I can solve most problems if I invest the necessary effort”) (p=.007). The details, including a much larger number of negative results, will be available soon on our blog. We’ll run another RCT soon, funding permitting.
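
As a rough illustration of what that low power means in practice, here is a minimal back-of-envelope sketch (illustrative only; it assumes a simple two-sample t-test and is not the analysis we actually ran):

```python
# Illustrative power calculation: with 15 treated participants and 17 controls,
# how large would an effect need to be (in standard-deviation units, Cohen's d)
# for a two-sample t-test to detect it with 80% power at alpha = 0.05?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
min_detectable_d = analysis.solve_power(
    effect_size=None,  # solve for the minimum detectable effect size
    nobs1=15,          # admitted participants
    ratio=17 / 15,     # controls per participant
    alpha=0.05,
    power=0.8,
)
print(f"Minimum detectable effect size: d ≈ {min_detectable_d:.2f}")
# With these sample sizes, the minimum detectable effect comes out to roughly
# a full standard deviation (d ≈ 1.0), so only quite large effects on outcomes
# like income would have been expected to show up.
```
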
Like many participants, we at CFAR have the subjective impression that the workshops boost strategicness; and, like most who have seen both an early and a recent workshop, we have the impression that today’s workshops are much better than those in the initial RCT. We’ll need to find ways to actually test those impressions, and to create stronger feedback loops from measurement into curriculum development.
Epistemic rationality curricula
After a rocky start, our epistemic rationality curriculum has seen a number of recent victories. Our “Building Bayesian Habits” class began performing much better after we figured out how to help people notice their intuitive, “System 1” expectations of probabilities.[7] Our “inner simulator” class conveys the distinction between professed belief and anticipation while aiming at immediate, practical benefits; it isn’t about religion and politics, it’s about whether your mother will actually enjoy the potted plant you’re thinking of giving her. More generally, the epistemic rationality curriculum appears to be integrating deeply with the competence curriculum, and to be becoming more appealing to participants as it does so. Strengthening this curriculum, and building in real tests of its efficacy, will be a major focus in 2014.
Integrating with academic research
We made preliminary efforts in this direction—for example, by taking standard questionnaires from the academic literature, including Stanovich’s indicators of the traits he calls “rationality”, and administering them to attendees at a Less Wrong meetup. (We found that meetup attendees scored near the ceiling, so we’ll probably need new questionnaires with better discrimination.) Our research fellow, Dan Keys (whose master’s thesis was on heuristics and biases), spends a majority of his time keeping up with the literature and integrating it with CFAR workshops, as well as designing tests for our ongoing forays into randomized controlled trials. We’re particularly excited by Tetlock’s Good Judgment Project, and we’ll be piggybacking on it a bit to see if we can get decent ratings.
Accessibility
Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality—such as a local politician, a police officer, a Spanish teacher, and others—are by and large quite happy with the workshop and feel it is valuable.
Nevertheless, the total set of people who can travel to a 4.5-day immersive workshop, and who can spend $3900 to do so, is limited. We want to eventually give a substantial skill-boost in a less expensive, more accessible format; we are slowly bootstrapping toward this.
Specifically:
  • Shorter workshops: We’re working on shorter versions of our workshops (including three-hour and one-day courses) that can be given to larger sets of people at lower cost.

  • College courses: We helped develop a course on rational thinking for UC Berkeley undergraduates, in partnership with Nobel laureate Saul Perlmutter. We also brought several high school and university instructors to our workshop, to help seed early experimentation with this material in their own curricula.

  • Increasing visibility: We’ve been working on increasing our visibility among the general public, with alumni James Miller and Tim Czech both working on non-fiction books that feature CFAR, and several mainstream media articles about CFAR on the way, including one forthcoming in the Wall Street Journal.

    Next steps

    In 2014, we’ll be devoting more resources to epistemic curriculum development; to research measuring the effects of our curriculum on both competence and epistemic rationality; and to more widely accessible curricula.

    Forging community

    The most powerful interventions are not one-off experiences; rather, they are the start of an ongoing practice. Changing one’s social environment is one of the highest impact ways to create personal change. Alum Paul Crowley writes that “The most valuable lasting thing I got out of attending, I think, is a renewed determination to continually up my game. A big part of that is that the minicamp creates a lasting community of fellow alumni who are also trying for the biggest bite of increased utility they can get, and that’s no accident.”
    The goal is to create a community that is directly helpful for its members, and that simultaneously improves its members’ impact on the world.

    Progress to date

    A strong set of seed alumni
    We have roughly 350 alumni so far, including scientists from MIT and Berkeley, college students, engineers from Google and Facebook, founders of Y Combinator startups, teachers, professional writers, and the exceptionally gifted high school students who participated in SPARC 2012 and 2013. (Not counted in that tally are the 50-some attendees of the 2013 Effective Altruism Summit, for whom we ran a free, abridged version of our workshop.)
    Alumni contact/community
    There is an active alumni Google group, which gets daily traffic. Alumni use it to share useful life hacks they’ve discovered, help each other troubleshoot, and notify each other of upcoming events and opportunities. We’ve also been using our post-workshop parties as reunions for alumni nearby (in the San Francisco Bay Area, the New York City area, and—in two months—Melbourne, Australia).
    In large part thanks to our alumni forum and the post-workshop party networking, there have already been numerous cases of alumni helping each other find jobs and collaborating on startups or other projects. There have also been several cases of alumni being recruited to do-gooding projects (e.g., MIRI and Leverage Research have engaged multiple alumni), and of alumni improving their “earn to give” ability or shifting their own do-gooding strategy.
    Many alumni also take CFAR skills back to Less Wrong meetups or other local communities (for example, an effective-altruism meetup in Melbourne, a homeless youth shelter in Oregon, and a self-improvement group in NYC); many have also practiced the skills in their startups and with co-workers (for example, at Beeminder, MetaMed, and Aquahug).
    Do-gooding diversity
    We’d like the alumni community to have an accurate picture of how to effectively improve the world. We don’t want to try to figure out how to improve the world all from scratch: there are already a number of groups who’ve done a lot of good thinking on the subject, including some who call themselves “effective altruists”, but also people who call themselves “social entrepreneurs”, “x-risk minimizers”, and “philanthropic foundations”.
    We aim to bring in the best thinkers and doers from all of these groups to seed the community with diverse good ideas on the subject. The goal is to create a culture rich enough that the alumni, as a community, can overcome any errors in CFAR’s founders’ perspectives. The goal is also to create a community that is defined by its pursuit of true beliefs, and that is not defined by any particular preconceptions as to what those beliefs are.
    We use applicants’ inclination to do good as a major criterion for financial aid. Recipients of our informally dubbed “altruism scholarships” have included members of the Future of Humanity Institute, CEA, Giving What We Can, MIRI, and Leverage Research. They also include many college and graduate students who have no official EA affiliation, but who are passionate about devoting their careers to world-saving (and who hope the workshops can help them figure out how to do so). And they include folks who are working full-time on varied do-gooding projects of broader origin, such as social entrepreneurs, someone working on community policing, and folks working at a major philanthropic foundation.
    International outreach
    We’ll be running our first international workshop in Australia, in February 2014, thanks to alumni Matt and Andrew Fallshaw.
    Also, starting in 2014, we’ll be bringing about 20 Estonian math and science award-winners per year to CFAR workshops, thanks to a 5-year pledge from Jaan Tallinn to sponsor workshop spots for leading students from his home country. Estonia is an EU member country with a population of about 1.3 million and a high-technology economy; going forward, this may be our first opportunity to check whether network effects appear when we reach a relatively large fraction of a single stratum (here, the country’s top math and science students).

    Next steps

    Over 2014, a major focus will be improving opportunities for ongoing alumni involvement. If funding allows, we’ll also try our hand at pilot activities for meet-ups.
    Specific plans include:
    • A two-day “Epistemic Rationality and EA” mini-workshop in January, targeted at alumni

    • An alumni reunion this summer (which will be a multi-day event drawing folks from our entire worldwide alumni community, unlike the alumni parties at each workshop);

    • An alumni directory, as an attempt to increase business and philanthropic partnerships among alumni.

    Financials

    Expenses

    Our fixed expenses come to about $40k per month. In some detail (a quick arithmetic check follows the list below):
    • About $7k for our office space

    • About $3k for miscellaneous expenses

    • About $30k for salary & wages, going forward

      • We have five full-time people on salary, each getting $3.5k per month gross. The employer portion of taxes adds roughly an additional $1k/​month per employee.

      • The remaining $7k or so goes to hourly employees and contractors. We have two roughly full-time hourly employees, and a few contractors who do website adjustment and maintenance, workbook compilation for a workshop, and similarly targeted tasks.
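
As a quick sanity check that these line items reconcile with the roughly $40k/month figure, here is a minimal arithmetic sketch using the rounded numbers above (illustrative only; the underlying figures are approximate):

```python
# Rough reconciliation of CFAR's monthly fixed costs, using the rounded
# figures quoted in this post (so totals only approximately match reality).
office = 7_000                      # office space
misc = 3_000                        # miscellaneous expenses

full_time_staff = 5
gross_salary_each = 3_500           # monthly gross salary per full-time employee
employer_taxes_each = 1_000         # rough employer-side payroll taxes per employee
salaried = full_time_staff * (gross_salary_each + employer_taxes_each)  # 22,500

salary_and_wages_total = 30_000     # the "about $30k for salary & wages" line
hourly_and_contractors = salary_and_wages_total - salaried              # ~7,500

total_fixed = office + misc + salary_and_wages_total
print(salaried, hourly_and_contractors, total_fixed)  # 22500 7500 40000
```
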

    In addition to our fixed expenses, we chose to run SPARC 2013, even though it would cause us to run out of money right around the end-of-year fundraising drive. We did so because we judged SPARC to be potentially very important[8], important enough to justify the risk of relying on this winter fundraiser to continue operations. All told, SPARC cost approximately $50k in direct costs (not counting staff time).
    (We also chose to e.g. teach at the EA Summit, do rationality research, put some effort into curricula that can be delivered cheaply to a larger crowd, etc. These did not incur much direct expense, but did require staff time which could otherwise have been directed towards revenue-producing projects.)

    Revenue

    Workshops are our primary source of non-donation income. We ran 7 of them in 2013, and they became increasingly cash-positive over the course of the year. We now expect a full 4-day workshop held in the Bay Area to give us a profit of about $25k (ignoring fixed costs, such as staff time and office rent), which is just under 3 weeks of CFAR runway. Demand isn’t yet reliable enough to let us run workshops that often, i.e., roughly one every three weeks. We’ve made significant traction on building interest outside of the Less Wrong community, but there’s still work to be done here, and that work will take time. In the meantime, workshops can subsidize some of our non-workshop activities, but not all of them. (Your donations do not go to subsidize workshops!)
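
To make the runway figure concrete, here is a minimal back-of-envelope sketch using the rounded numbers from this post (illustrative only):

```python
# Back-of-envelope runway math, using the rounded figures in this post.
monthly_fixed_costs = 40_000      # ~$40k/month of fixed expenses (see "Expenses" above)
workshop_profit = 25_000          # expected profit from one full Bay Area workshop

weeks_per_month = 52 / 12         # ~4.33 weeks per month
weekly_burn = monthly_fixed_costs / weeks_per_month   # ~$9.2k/week

runway_weeks = workshop_profit / weekly_burn
print(f"One workshop buys ~{runway_weeks:.1f} weeks of runway")  # ~2.7 weeks
```
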
    We’re also actively exploring revenue models other than the four-day workshop. Several of them look promising, but they need time to mature before they contribute meaningful income.

    Donations

    CFAR received $166k in our previous fundraising drive at the start of 2013, and a smaller amount of donations spread across the rest of the year. SPARC was partially sponsored with $15k from Dropbox and $5k from Quixey. These donations subsidized SPARC, the rationality workshop at the EA summit, research and development, and core expenses and salary.

    Savings and debt

    Right now CFAR has essentially no savings. The savings we had accumulated by the end of 2012 went to (a) covering the gap between income and expenses and (b) funding SPARC.
    A $30k loan, which helped us cover core 2013 expenses, comes due in March 2014.

    Summary

    If this winter fundraiser goes well, it will give us time to bring some of our current experimental products to maturity. We think we have an excellent shot at making major strides forward on CFAR’s mission, and at becoming much more self-sustaining, during 2014.
    If this winter fundraiser goes poorly, CFAR will not yet have sufficient funding to continue core operations.

    How you can help

    Our main goals in 2014:

    1. Building a scalable revenue base, including by ramping up our workshop quality, workshop variety, and marketing reach.

    2. Community-building, including an alumni reunion.

    3. Creating more connections with the effective altruism community, and other opportunities for our alumni to get involved in do-gooding.

    4. Research to feed back into our curriculum—on the effectiveness of particular rationality techniques, as well as the long-term impact of rationality training on meaningful life outcomes.

    5. Developing more classes on epistemic rationality.

    The three most important ways you can help:
    1. Donations
    If you’re considering donating but want to learn more about how CFAR uses money, or you have other questions or hesitations, let us know—we’d be more than happy to chat with you via Skype. You can sign up for a one-on-one call with Anna here.
    2. Talent
    We’re actively seeking a new director of operations to organize our workshops; good operations can be a great multiplier on CFAR’s total ability to get things done. We are continuing to try out exceptional candidates for a curriculum designer.[9] And we always need more volunteers to help out with alpha-testing new classes in Berkeley, and to participate in online experiments.
    3. Participants
    We’re continually searching for additional awesome people for our workshops. This really is a high-impact way people can help us; and we do have a large amount of data suggesting that you (or your friends) will be glad to have come. You can apply here—it takes 1 minute, and leads to a conversation with Anna or Kenzi, which you (or they) will probably find interesting whether or not you end up coming.
    Like the open-source movement, applied rationality will be the product of thousands of individuals’ contributions. The ideas we’ve come up with so far are only a beginning. If you have other suggestions for people we should meet, other workshops we should attend, ways to branch out from our current business model, or anything else—get in touch, we’d love to Skype with you.
    You can also be a part of open-source applied rationality by creating good content for Less Wrong. Some of our best workshop participants, volunteers, hires, ideas for rationality techniques, use cases, and general inspiration have come from Less Wrong. Help keep the LW community vibrant and growing.
    And, if you’re willing—do consider donating now.

    Footnotes

    [1] That is: by giving up a dollar, you can, given some simplifications, cause CFAR to gain two dollars. Many thanks to Matt Wage, Peter McCluskey, Benjamin Hoffman, Janos Kramar & Victoria Krakovna, Liron Shapira, Satvik Beri, Kevin Harrington, Jonathan Weissman, and Ted Suzman for together putting up $150k in matching funds. (Matt Wage, as mentioned, promises not only that he will donate if his pledge is matched, but also that he won’t donate the $50k of matching funds to CFAR if the pledge isn’t filled—so your donation probably really does cause matching at the margin.)
    [2] This post was the result of a collaborative effort between Anna Salamon, Kenzi Amodei, Julia Galef, and “Valentine” Michael Smith—like many of our endeavors at CFAR, it went through many iterations, in many hands, to create an overall whole where the credit due is difficult to tease apart.
    [3] In the broadest sense, CFAR can be seen as a cognitive branch of effective altruism—making a marginal improvement to thinking where thinking matters a lot. MIRI did not gain traction until it began to include explicit rationality in its message—maybe because thinking about AI puts heavy loads on particular cognitive skills, though there are other hypotheses. Other branches of effective altruism may encounter their own problems with a heavy cognitive load. Effective altruism is limited in its growth by the supply of competent people who want to quantify the amount of good they do.
    It has been true over the course of human history that improvements in world welfare have often been tied to improvements in explicit thinking skills, most notably with the invention of science. Even for someone who doesn’t think that existential risk is the right place to look, trying to invest more in good reasoning, qua good reasoning—doubling down on the huge benefits which explicit cognitive skills have already brought humanity—is a plausible candidate for the highest-impact marginal altruism.
    [4] That is, we’ve had two years since our barest beginnings, when Anna, Julia, and Val began working together under the auspices of MIRI; and just over a year as a financially and legally independent organization.
    [5] Our pilot minicamps, prior to that October, gave us valuable data and iteration practice; but they did not pay for their own direct (room and board) costs, let alone for the staff time required.
    [6] I’m estimating quality by workshop participants’ feedback, here; it takes many fewer hours now for our instructors to create units that receive the same participant ratings as some older unit that hasn’t been revised (we did this accidental experiment several times). Unsurprisingly, large quantities of unit-design practice, with rapid iteration and feedback, were key to improving our curriculum design skills.
    [7] Interestingly, we threw away over a dozen versions of the Bayes class before we developed this one. It has proven somewhat easier to create curricula around strategicness, and around productivity/effectiveness more generally, than around epistemic rationality. The reason for the relative difficulty appears to be two-fold. First, it is somewhat harder to create a felt need for epistemic rationality skills, at least among those who aren’t working on gnarly, data-sparse problems such as existential risk. Second, there is more existing material on strategicness than on epistemic rationality, and it is in general harder to create material from scratch than to adapt what already exists. Nevertheless, we have, via much iteration, had some significant successes, including the Bayes class, separating professed beliefs from anticipated ones, and certain subskills of avoiding motivated cognition (e.g. noticing curiosity; noticing and tuning in to mental flinches). Better yet, there seems to be a pattern to these successes which we are gradually getting the hang of.
    We’re excited that Ben Hoffman has pledged $23k of funding specifically to enable us to improve our epistemic rationality curriculum and our research plan.
    [8] From the perspective of long-term, high-impact altruism, highly math-talented people are especially worth impacting for a number of reasons. For one thing, if AI does turn out to pose significant risks over the coming century, there’s a significant chance that at least one key figure in the eventual development of AI will have had amazing math test scores in high school, judging from the history of past such achievements. An eventual scaled-up SPARC program, including math talent from all over the world, may be able to help that unknown future scientist build the competencies he or she will need to navigate that situation well.
    More broadly, math talent may be relevant to other technological breakthroughs over the coming century; and tech shifts have historically impacted human well-being quite a lot relative to the political issues of any given day.
    [9] To those who’ve already applied: Thanks very much for applying, and our apologies for not getting back to you yet. If the funding drive is filled (so that we can afford to potentially hire someone new), we’ll look through the applications shortly after the drive completes and will get back to you then.