What’s going on at CFAR? (Updates and Fundraiser)

This post is the main part of a sequence of year-end efforts to invite real conversation about CFAR, published to coincide with our fundraiser.

Introduction /​ What’s up with this post

My main aim with this post is to have a real conversation about aCFAR[1] that helps us be situated within a community that (after this conversation) knows us. My idea for how to do this is to show you guys a bunch of pieces of how we’re approaching things, in enough detail to let you kibitz.[2]

My secondary aim, which I also care about, is to see if some of you wish to donate, once you understand who we are and what we’re doing. (Some of you may wish to skip to the donation section.)

Since this post is aimed partly at letting you kibitz on our process, it’s long.[3] Compared to most fundraiser posts, it’s also a bit unusually structured. Please feel free to skip around, and to participate in the comment thread after reading only whatever (maybe tiny) pieces interest you.

I’d like CFAR to live in a community

I’d like CFAR to live in a community where:

  • People can see aCFAR

  • We can see you guys seeing us

  • Folks are sharing what they’re seeing, not what their theory says they should see

  • Interested folks within LessWrong, and within the CFAR alumni community, can benefit from the experience we gather as we try things and collide with reality. Our failures and fizzles aren’t opaque (they have moving parts), and our successes can be built on by others

  • You guys can tell us what we’re missing and help us do cooler experiments

  • We are all aware in common knowledge that aCFAR is one group among many. We all know together that other groups already have norms and customs and their own local territories. Both we and you guys can track where we are having good or bad impacts on the spaces around us; it’s easier to be a good neighbor

In the past, CFAR didn’t know how to live in a community in this way (partly because I was often in charge, and I didn’t know how to do it). But I think CFAR and I now have the ability to do this.

As an example of the gap: I used to be somehow trying to claim that we were running our organization in the best, most EA-efficient, or most rational way. As a result, whenever someone argued in public that some revised action would be better, I thought I had to either:

  • Change what I was doing (costly, in cases where I had a multi-step plan they weren’t tracking or knew something they didn’t)

  • Refute them (also costly; it requires transferring context across an inferential distance, and even then they might not be convinced, while I still wanted to find out how my thing would go)

  • Arrange things (for next time) so people like them don’t say things like that, e.g. by withholding information about our workings so folks can’t critique our plans

But now, it’s different. We are visibly a particular organization, believed in by particular people, with details. The premises we believe in together (aka our operational premises for what we CFAR staff are building) are separated out from our epistemics, and from claims about what’s objectively best.

Anyhow: requesting community membership of this sort for CFAR, and setting you guys up to have a free and full conversation about CFAR, is the main business of this post, and is the main thing I’m trying to ask of you, Dear Reader, if you are interested and able.

Kibitzing requests

Some kinds of kibitzing I’d particularly appreciate:

  • Make it easy to see CFAR through your eyes. (Did we help you? Harm you? Do we look like random people nattering about nothing? Do we seem hopelessly blind? Do we make life more relaxing for you somehow? Do you care what happens with CFAR, one way or another?)

  • Ask questions

  • Flag where something doesn’t make sense to you /​ where you notice confusion

  • Guess how we might get unstuck in places we know we’re stuck

  • Guess what our blind spots are, and what experiments might make stuff more obvious to us in places we haven’t realized we’re stuck

  • Help make the real causes-of-things visible to someone who is young or is coming from outside these communities, as in Sarah’s point #6

  • Hope for something out loud

  • Try to speak to why you care rather than rounding to the nearest conceptual category.

Introductions: Me, aCFAR… and you, Reader?

I’ll start the introductions.

I’m Anna Salamon. I spent my childhood studying… not so much math, although also that, but mostly studying the process by which I or others learned math.[4]

I feel like a bit of a war veteran around the rationality/AI risk world, as I think many of the old-timers are. I joined the AI x-risk scene in 2008 because there were appallingly few people working on AI x-risk at the time (maybe five full-time equivalents, with those hours spread across maybe twenty people). I, like many at the time, worked really really hard while feeling isolated from almost everyone, people for whom AI risk somehow didn’t register and whom we had to save without them getting it. I felt a strong, utilitarian trust with the others working on x-risk.

From 2012-2020 I worked really hard on CFAR (initially at Eliezer’s suggestion) to provide a community where people working on AI risk could be less alienated from our surroundings. Then, I changed my mind about something hard to articulate about what kind of “organizations” had any shot at making things better. Now, I’m hoping again to do aCFAR, differently.

I’ll also try introducing aCFAR as though it’s a particular person with a history:

Reader, this is a Center for Applied Rationality (aCFAR).

In its past, CFAR was one of the major co-creators of the Bay Area rationalist community, and of the rationalist and AI safety movements more broadly – people would come, get pulled into some sort of magic near our classes, and in some cases move to the Bay (or somewhere else) to work at MIRI or co-found FLI or do other neat stuff. (We had ~1900[5] guests attend a 4.5-day or longer program of some sort.) CFAR also caused concepts like “double crux,” “TAPs,” and “inner simulator” to spread across rationalist and EA spaces. We hope to gradually do something similar with a new set of concepts.[6]

Today, CFAR is a vehicle for running workshops that I and the rest of our current staff deem worthy, which are an amalgam of classic CFAR stuff (descended from Eliezer’s Sequences) plus some newer stuff aimed at “honoring who-ness.” It’s also an experiment, as discussed throughout the post.

If you’re up for introducing yourself (which I’d appreciate!) there are two good ways:

  • You can say a bit about yourself and what brought you to the conversation in the introductions subthread

  • Or, you can write some object-level comment and add a sentence or two about where you’re coming from

On to the post proper!

Workshops

Workshops have always been the heart of our work at aCFAR. We spend most of our staff time tinkering toward making the workshop good, staring at folks at the workshop to see if it is good, iterating, etc. It’s where our take on rationality comes to life, changes us, is changed by its encounters with some of you guys, and so on.

So – if you want to kibitz on our current generators – it may be where you can best see them.

For those just meeting us: A CFAR workshop is a 4.5-day retreat with about 25 varied guests, 12-ish staff and volunteers, and a bunch of hard work, rationality, and conversation. The workshop typically involves a bunch of classes on rationality techniques and lots of time to apply those techniques and work on solving actual problems in real life. We currently have our next workshop scheduled for January 21-25 in Austin, TX.

Workshops: favorite bits

Among my favorite positive indicators from our workshops:

1. People made friends at the workshops and in the alumni network.

Many workshop guests across our history have told me a CFAR workshop was the first time they’d managed to make friends in the decade-or-more since they finished college.

This wasn’t an accidental side-effect of the workshops; we tuned the workshops toward: (a) creating contexts where people could update deeply (which helps with making real friends) and (b) arranging small and interactive classes with pair work, providing a “names and faces” Anki deck, hosting lightning talks, etc. to make it easy to make new friends at the workshop.

This wasn’t a side-goal for us, separate from the main aim of “rationality training”; IMO there’s a deep connection between [conversations and friendships, of the sort that can make a person bigger, and can change them] and the actual gold near “rationality,” such that each of (true friendships, rationality) can activate the other.

2. People had conversations at the workshops that updated the real generators of their actions.

Many conversations in the default world involve people explaining why a reasonable person might believe or do as they are doing, without sharing (or often knowing) the causes of their choices. But at CFAR, the real causes of actions often were (and are) properly in the conversation.

Relatedly, people at workshops would become at least briefly able to consider changing things they’d taken for granted, such as career paths, ways of relating to other people, etc., and they’d do it in a context full of curiosity, where there was room for many different thoughts.

3. The workshop was visibly “alive” in that it felt organic, filled with zany details, etc.

If this CFAR is going well, we should have spare energy and perceptiveness and caring with which to make many side-details awesome. We did this well in the past; we seem to be doing it even better now.

For example, during Questing at our November workshop, we had CFAR instructors run short “interludes” during which people could breathe and reflect for a moment in between 10-minute hero-and-sidekick problem-solving blocks. However, due to a minor scheduling mishap, CFAR instructor Preston ended up committed to being in two places at once. Preston solved his problem by setting up an “oracle” to run his section of inner-simulator-inspired Questing interludes.

For another example, chef Jirasek created waves of life emanating from the kitchen in the form of music, food art, and sort of ostentatious interactions with the locals (e.g. replacing his whole wardrobe with stuff from some local thrift stores).

4. Truth-seeking, curiosity-eliciting, rationality-friendly context

The context at our workshops is friendly both to hearing peoples’ perspectives deeply and to being able to point out possibly-contrary evidence.

Workshops: iffy bits, and their current state

Although there’s much I love about our old workshops, I would not be able to run them now (though I could probably cheer for someone else doing it); there’s too much I was eventually unable to stomach for myself. In particular:

Power over /​ doing something “to” people (current status: looks solved)

I currently aim not to take pains to impact someone unless I can take equal pains to hear them (in the sense of letting them change me, in deep and unpredicted ways). This is part of a general precept that conscious processes (such as CFAR guests) should not be subservient to processes that can’t see them (such as a rock with “follow policy X” written on it, or a CFAR instructor who hasn’t much attention to spare for the guest’s observations).

My main complaint about our past workshops is that I, and much of ‘we’, did not always hit this standard (although we tried some, and some of our staff did hit it). This standard is part of my current take on how to do epistemics in groups.

More details about this complaint of mine, for those interested:

1. Excessively narrow backchaining /​ insufficient interest in both the world, and our workshop guests
I was scared about AI risk, all the time. I was in an emergency. And while I did try at the workshops to drop all that for a bit and take an interest in the people in front of me, I was also at the workshops to “make progress” on the AI risk stuff.

So, my notion of which participants were the coolest (most worth paying attention to, inviting back, etc.) was mostly:

  • Who might do good work re: AI safety (math/​CS chops, plus thinking in MIRI-ish ways), plus

  • Who was likely to donate to us or an EA organization, or organize parts of the alumni community, or visibly spread our rationality culture, or otherwise backchain in ways that would already seem sane to inner circle rationalists /​ AI safety people

(As opposed to, say, who had neat make-stuff skills or research patterns we didn’t have, that might broaden our horizons; I was too tired to really see or care about such.)

2. Nudging the CFAR alumni culture toward #1, so our community also became narrower
I, and other CFAR staff, weren’t the only ones who evaluated coolness a bit too narrowly, by my present taste. I think I and others in positions of community leadership often helped set this up in various ways.

(As a contrast point, the 2007-2011 OvercomingBias commenter and meetup community had broad and deep engagement without being a “school of thought” in the way the CFAR and LW rationalists later were, IMO.)

3. Trying to do something “to” our guests; priming our guests to want something done to them.
Many of our guests signed up for the workshop so that we could help make them more rational so that they could be better EAs (for example). And we wanted them there for much the same reason (sometimes; some of us).

4. Casting ourselves as having more epistemic authority or charisma than I currently think warranted.
Deeply related to #1, 2, and 3 above.

I’m relieved that our Nov 2025 workshop (and our prior, tiny pilot at Arbor Summer Camp) did not have these problems, AFAICT. Things I saw in November that I believe I’d see differently if we did still have these problems:

  • I felt relaxed around the participants; my fellow instructor Jack Carroll said they liked the participants for the first time instead of feeling at war; many or all of us instructors simply enjoyed reading the exit surveys instead of feeling jostled by them

  • We heard considerably more remarks than usual along the lines of “gosh, rationalists are really friendly when they get together in person”

  • On Day 4 of the four-day workshop, we spent three and a half hours on an activity called Questing, in which participants took turns being the “hero” (who worked on whatever they liked) and the “sidekick” (who assisted at the hero’s direction) for ~10-minute chunks. This activity was extremely well-liked (it did best of all activities on our survey, and many participants said great things about it). In the past, similar activities led to many participants feeling jarred/jostled/attacked/hurried; this time, despite the schedule, it felt spacious and friendly

This is enormously relieving to me; uncertainty about whether we could change this thing was my main reason for hesitating to run CFAR workshops. We will of course still be keeping our eyes out.

More workshop iffy bits

While the “power over” thing was the iffy bit that bugged me the most, there are also other things we want or need to change about the workshop. You can see our whole workshop-related bugs-and-puzzles-and-todos list here.

More about the new workshop

If you’ve been to a CFAR workshop in the ~2015-2020 era, you should expect that current ones:

  • Have roughly 2/​3rds classic content, including Building a Bugs List, TAPs, Inner Sim, and almost all the more memorable classes

  • Use the same format

  • Have roughly 1/​3rd new content, mostly aimed at practical ways to be less “seeing like a state” when applying rationality techniques, and to be more “a proud gardener of the living processes inside you /​ a free person with increasing powers of authorship.” (We’ve been calling this thread “honoring who-ness.”)

Further detail, if you want it, at More details on CFAR’s new workshops.

Larger contexts surrounding our workshops

In this section, I’d like to talk about the larger contexts (in people, or in time) that our workshops depend on and contribute to, as well as some solved and unsolved pieces about those larger contexts.

aCFAR’s instructors and curriculum developers

Our major change, here, is that all instructors and curriculum developers are now very part-time. (In 2012-2020, most workshop instruction and curriculum development work was done by full-time staff.)

There are two big reasons I made this change.

  • First, I’m pretty sure it’s healthier for the instructors (in the 2013-2020 era, many CFAR instructors had very hard times, in ways that reminded some of us of the troubles of traveling bands)[7]

  • Second, it makes it easier for CFAR to be unafraid near questions of whether we should change something major about what we’re doing, should shut down, etc. – since our staff mostly don’t have their only avenues for meaning (or for income and life stability) bound up with CFAR

A pleasant bonus is that we get more mileage per donor dollar: a few hours/​week of trying our units on volunteers and on each other is enough to keep CFAR in our shower thoughts as we go through the week (for me, and for many other instructors AFAICT), and the rest of our normal life seems then to give us useful insights too. (And we’re paid hourly, so a “lighter” schedule that still gets curriculum development flowing is a good deal for donors!)

aCFAR’s alumni community

Our workshop development process is stronger with a healthy alumni community in several ways:

  1. An alumni community lets us better see the long-term impact of our workshops

  2. An alumni community lets workshop alums learn and add to the art more thoroughly by practicing with others (As well as hopefully allowing cool new business collaborations, friendships, etc.)

  3. It seems more wholesome to tend (and be tended by) a community of alums, vs having only one-off interactions with new workshop guests

Our alumni community was extremely fun and generative in CFAR’s early years, but gradually became less invested and lower trust over time, partly as a natural side-effect of passing years, and partly because we weren’t doing community all that well. We still have an alumni mailing list and it hosts some interesting discussions, but things there feel less active and exciting than they once were.

We like our alumni and think they’re cool! We’d like to figure out how to freshly kindle some of the energy that made the old CFAR alumni community as cool a place as it was.

My guess (not a promise) is that we should start a new alumni community with these features:

  • Old alumni are not automatically in, but you are encouraged to reach out if you’re an old alum and want to join the new community

  • When a person comes to a workshop, they automatically become a member of the “new alumni community” for a fixed period of time (a year? two years?), after which their membership automatically expires unless they contribute in some way (e.g. volunteering at a workshop; donating /​ paying a membership fee; or making something neat for other alumni)

  • There are annual alumni reunions, a mailing list or other structure for discussions, and some smaller, lower-cost “CFAR alumni workshops” on specialized topics

Lineage-crediting and gatekeeping

It is vital to accurately, publicly track where good things come from (lineage-crediting). At the same time, it is necessary not to let people into our events or alumni networks who we can’t deal with having there. This combination can be awkward.

As an example of this awkwardness: Michael Vassar taught me and many people a bunch about rationality when I joined the rationalist and AI safety scene in 2008, and he was also quite involved in me changing my mind about the stuff I mentioned changing my mind about in 2020. I can see traces of his ideas all over this post. My thoughts in this post, and the ideas in the newer parts of CFAR, were also greatly influenced by my good friends Anonymous and Anonymous.

And yet, for varied reasons, I wouldn’t feel good about having any of those three visit an intro CFAR workshop (although I might well invite Michael Vassar to an alumni reunion or similar event, where my tolerances are a bit broader; and I’d gladly have all three to a retreat run by a more bespoke CFAR spin-off called LARC/​Bramble). I think this is not unusual bad luck; my best guess is many of those who “woke up” as kids in strange surroundings and who forged their own paths to being unusually conscious and agentic, dodged some of the “be rule-abiding” training that makes most middle class people easy for other middle class people to predict and be safe around. And the CFAR alumni network is a large, semi-institutional context designed to work okay for folks who are within the normal range on rule-abiding and who are used to getting to assume others are too, for good reason. (To be clear, I also learned a pile of rationality from many others, most notably Eliezer, who are reliably rule-abiding.)

This sort of “awkward” isn’t only costly because of wanting not to alienate my friends. It’s also costly because it’s confusing (to me, to them, and to workshop guests and onlookers). When rationality content is presented within a context that couldn’t have made that content and that doesn’t help tend the sources of that content, it’s harder to set up good feedback loops. (Cf. the Caring that Tends its own Sources).

But, here I am, anyhow, having decided that this is the best world I can manage, and trying to describe something of its workings in public.

My plan, roughly, is the obvious one:

  • Try to acknowledge the lineages of ideas whenever it comes up, without regard to whether it’s awkward

  • Don’t admit people to CFAR workshops or events who we can’t deal with (or try not to; but be medium in my false-positive/​false-negative tradeoff ratio)

  • Do value: visibly staying in touch with thinkers I’m relevantly downstream of; coming into contact with varied high-capacity people; trying to MacGyver decent feedback loops where I can

Michael “Valentine” Smith

While we are on the topic of both gatekeeping and lineage-tracking: we are considering bringing CFAR co-founder Michael “Valentine” Smith back onto our workshop staff.

I’d like to note this publicly now, because:

  1. We seven years ago said publicly that Valentine “[would] not be in any staff or volunteer roles going forward, but remain[ed] a welcome member of the alumni community”, and so it seems well to be similarly public about my revised intent

  2. A fundraiser post seems like an honorable place to publicly share plans and policies that some may object to, because folks can easily not-donate (or advocate that others not-donate) if they want.

If it matters, I and various others have worked closely with Valentine at LARC/​Bramble (CFAR’s more bespoke spinoff organization) for the last two years, and I have found it comfortable, wholesome, and generative.[8]

The broader rationality community

The broader rationality community makes our work at aCFAR feasible (e.g. via donations, via sending us participants who are already rationality fans, via giving us good rationality stuff to draw on, and via good critiques). We are grateful to you guys. It’s important to me that we give back to you, somehow, in the long run. My main current theory as to how to give back is that we should write substantive blog posts as our theories-of-rationality congeal, and should make our process open so if we fail this time, it’ll be easier for interested parties to see what exactly went wrong (no opaque fizzles).

Flows of money, and what financial viability looks like within our new ethos

We do not yet have a demonstrated-to-work plan under which aCFAR (in our new incarnation) can be financially sustainable.

In 2012-2020, a large majority of our donations came from AI risk donors, who donated because CFAR recruited for MIRI (or to a lesser extent other AI safety efforts) or because they otherwise believed we would help with AI risk.

Also, in 2012-2020, a significant chunk of our workshop revenue came from EAs (both AI risk people and EAs more broadly) who had heard that CFAR workshops would somehow make them better EAs, and perhaps also that CFAR itself was an EA organization worth supporting. And so they balked less at the (then) $3.9k price tag because it was parsed as an EA expense.

Double also, in 2012-2020, we workshop instructors broadly tried to position ourselves as people who know things and can give that knowledge to you (and so are worth paying for those things).

My current attempt at CFAR branding lets go of all three of these angles on “you should give us money,” in favor of an ethos more like: “we (including you, dear workshop guest) are a community of people who love to geek out (in a hands-on way) about a common set of questions, such as:

  • What things are most worth our attention?

  • What processes might help us form true beliefs about the things that matter the most?

  • What processes in fact lead to good things in the world, and how can we tell, and does it work if we mimic them?

  • What is known by different sets of “makers” in the world, e.g. by the people who keep the medical system running, or who do academic chemistry research, or who make movies, or who do handyman work? How can you tell?

  • Are there common illusions getting in our way, e.g. from Kahneman-style biases, or from memetics or social ties, or from ego? What patterns might help us compensate?

  • Where do our goals come from?”

Under this model, CFAR instructors differ from workshop guests in that we spent a bunch of time testing and refining particular classes (which we try to make into good springboards for doing hands-on geeking out of this sort in common, and so for jumpstarting guests’ ability to have rich conversations with each other, and to do rich, grounded noticing together, and to point out traction-creating things that are visibly true once pointed-to). But we try not to differ in perceived/​requested epistemic status, or in “you should believe us”-flavored social cues.

Also, under the new model, our requests aren’t backed by a claimed long-run EA payoff: we are not saying “please consider sacrificing parts of your well-being to work at CFAR, or to attend CFAR or implement our taught habits, because it’ll help with AI risk somehow.” Instead we are saying “please come nearby if it interests you. And if you like what happens next, and what changes it seems to give you in the observable near- and medium-term, then maybe keep trying things with us for as long as this seems actually healthy /​ rewarding /​ to give good fruits to you and visible others in a simple, cards-on-the-table way.”

I expect our new model is more wholesome – I expect it’ll bring healthier feedback loops to our curriculum and culture, will form a healthier town square that is more fruitful and has fewer stuck beliefs and forcefully propagated illusions, and will be an easier context in which to keep us staff wanting to share most info in public, including evidence we’re wrong. But I don’t know if it’ll bring in enough revenue to keep us viable or not. (And we do still need money to be viable, because being a custodian of such a community requires staff time and money for food/​lodging/​staff flights/​etc.)

If we can’t make a financial go of things under our new ethos, my plan is not to revert to our past ethos, it’s to fold – though my guess is we’ll make it.[9]

How our ethos fits together

In this section, you’ll find pieces of what motivates us and principles we intend to follow.

Is aCFAR aimed at getting AI not to kill everyone? If not, why are you (Anna) working on it?

We are not backchained from “help get the world into state X which’ll be better for AI,” nor from “help recruit people to AI safety work,” “help persuade people to take better AI policy actions,” or anything like that.

My (Anna’s) motivations do and don’t relate to AI safety; it’s complicated; I’ll publish a separate post going into detail here in about a day.

Principles

This is an attempt to make visible the principles that I (and to some extent CFAR) am trying to act on in our CFAR work. I, and we, might change our minds about these – these aren’t a promise – but I plan to review them every three months and to note publicly if I change my mind about any (and to note publicly if CFAR changes leadership to someone who may run on different principles).

We’ll start with some short-to-explain ones, then head into some long explanations that really should be their own blog posts.

Truth is crucial

This principle is one of the “things that go without saying” around LessWrong most of the time (and is shared with past-CFAR), but it’s precious.

Honor who-ness

Remember each person is a miracle, is way larger than our map of them, and is sustained by knowledge and patterns of their own making. Honor this. Allow ourselves to be changed deeply by the knowledge, patterns, character, etc. of anyone who we deeply change.

Stay able to pivot or shut down, without leaving anybody in the lurch

It’s easier to stand by principles if there’s a known and not-too-painful-or-commitment-breaking path for quitting within a few months (should we prove unable to stick by these principles while remaining solvent, say).

Serious conversation, done in hearty faith

This section is written by my colleague John Salvatier.

Serious conversations deal with the real issues at play and go beyond literary genre patterns. And serious conversations in hearty faith apply enough real human trying to get to real discovery about the topic.

Serious discussions of problems we really care about, where the participants are fully engaged, are kind of a miracle. For example, if you’re wondering whether to quit your job, a serious and hearty conversation about the question, and about what matters to you in life, can have a profound effect on your life.

At this CFAR, we are trying to have hearty faith with each other and with others to create the possibility of serious conversations. (And we are trying to do this without forcing, via repeatedly asking ourselves something like: “does it feel good to share my real cruxes right now, and to hear where [person] is coming from? If not, what sensible reasons might I have for not (bearing in mind that there’s lots of useful stuff in me that conscious-me didn’t build)?” We aren’t trying to impose hearty faith; we’re taking its presence as a thermometer of whether life is going well right here.)

Serious conversations are like science experiments. Their success is not measured on reaching a particular outcome, but on their revealing substantial things about the world that bring us into closer contact with the world.

The classic Eliezer/Robin AI Foom Debate is a good example of something that might look like a serious conversation but somehow isn’t a “conversation” in quite the sense we mean. A conversation would spend a bunch of time on asymmetric moves where one person is mainly trying to understand the other (for example, by passing their ITT). Instead, Eliezer and Robin each use each other as a foil to better articulate their own view. This might be serious research, or good exposition for an audience, but it isn’t the thing we have in mind.

Hearty faith is necessary for successful serious conversations when our maps (or theirs) have messy relevance to the world and our goals. Which they will when the topic is a life frontier or a world frontier.

Hearty faith is different from just good faith.

Bad faith is lying, fraud. An abandoning of our integrity.

Lousy faith however is when our intentions are like a thin stew instead of a hearty, many-flavored, full-bodied one. In “lousy faith” we are putting in effort to keep integrity on some dimensions, but not very many.

  • My cutest example of “lousy faith” is a teacher who replies to a kid’s “can I go to the bathroom?” with “I don’t know, can you?”

  • A subtler example is someone who engages with what you say, but takes a narrow and incurious view of where you’re coming from and what you mean by your words, adversarially playing dumb about what you’re saying. They’re not lying about trying to understand, but they’re certainly not applying themselves or being up front about their (lack of) investment.

  • Another paradigmatic example: “Why don’t you just [radically shift your mindset to mine]?” said as if that were an atomic action.

Hearty faith, by contrast, is when we act with attention to many sorts of integrity all at once (the more, the heartier, like a hearty stew).

Hearty faith is necessary for serious conversations with messy world maps to be successful because every such conversation has many relevant-but-illegible layers that would otherwise stay obscured; hearty faith allows those layers into the conversation on good terms.

The caring that tends its own sources

This is a phrase I made up, inspired by Eliezer’s The Lens that Sees its Own Flaws (which is one of my very favorite Eliezer posts, and conveys an idea that’s on my shortlist for “most inspiring insights ever”) and also by conversations with my friends Evan McMullen and Anonymous.

I hope to eventually write a blog post about this principle that makes sense. But this is not that blog post; this is a placeholder.

So: we find ourselves alive, awake, caring. How did I, or you, reader, get to be like this? It’s a bit of a miracle. We can tell decent causal stories (mine involves my parents, their parents, the United States, a brief opening in Hungary’s border during a war, my mom’s careful crafting of endless ‘math games’ for me, my dad’s absorbing a useful secularism from the Soviet Union that he rightly hated… going further back we have the European Enlightenment, eons of biological evolution, and more). We can tell decent causal stories, and it’s worth bothering to tell them, and bothering to try to get it right; and at the end of the day “a miracle” is still a decent term for it – the processes that let us be here are something large, and worth marveling at, and contain deep generative “magic” that we don’t yet know how to build.

How to relate to this?

Concretely:

  • I’ll find desires within me that are busy doing a flailing pattern that won’t get anywhere – pieces of caring that are not yet “helping tend their own sources.” (For example, I’ll be reflexively “not-listening-harder” to try to make a loved one act differently.) In such cases, I try to gradually help the reflexive desire become able to care usefully across slightly-longer time horizons, in collaboration with “me as a whole.” (Then, the “caring that tends its own sources” can be bigger.)

  • I try to trace lineages aloud, even where it’s awkward

  • When I see someone who seems surprisingly (skilled /​ generative /​ agenty /​ etc), I try to ask what process made them

  • I make some effort to help tend the processes that made me, for myself and for CFAR. (E.g., while this CFAR is not an EA organization, we’ve been helped by EA and I hope we can leave it better than we found it.)

No large costs without a feedback loop grounded in earned knowledge and caring

This principle is an attempt to articulate the main thing I changed my mind about in 2020.

It now seems to me that when you’re running an organization, such as aCFAR or the neighborhood bakery, you’ll benefit if you:

  • Are aware of the resources you depend on. (As a bakery you might depend on customers, ingredient suppliers, a thriving downtown that helps bring potential customers by your door, the cultural tradition of coffee and baked goods...)

  • Take an interest in what produces and sustains these resources. Be aware of the rough extent to which you do or don’t have reliable maps of what’s involved in producing and sustaining these sources, so you can maintain the needed amount of [respect /​ Chesterton’s fence /​ actively watching out for needed conditions you shouldn’t disrupt], without being unduly cautious about everything.

    For example, I understand how to turn hot water and peppermint teabags into peppermint tea. (Thus, I can change up my water heating method, its temperature, etc without being surprised by the results.)

    On the other hand, my friend sometimes likes to walk his dog with me. I’m pretty sure there’s detail to where he will/​won’t take his dog, when he does/​doesn’t feel like doing it, etc., and I’m pretty sure that detail helps maintain cool functionality, but I also know I don’t know how it all works. Thus, I know that if I try making many of these decisions for my friend, without consulting him, I might mess up some resource he’s used to counting on.

  • Take an interest in the specific “bridging structures” that let particular resources coexist.

    For example, a coaster is a good “bridging structure” to keep my hot teacup from damaging my wooden table.

    For a more complex structure, a bakery’s proprietor might be careful to keep their sidewalk shoveled, to greet neighboring business owners, etc. as part of a plan to allow the bakery and the downtown it’s in to avoid harming each other. This kind of bridging structure will need to regularly take in new info, since one probably can’t have an adequate static map of downtown as a whole.

  • Let each resource-flow and each bridging structure have a keeper who maintains both an inside view about what’s necessary for sustaining the resource flow and an inside view about how much “magic” isn’t yet in their map.

    That keeper must be responsible for deploying these resources only in ways that make inside-view sense to them (e.g., if there’s a small experiment, the keeper should have felt hope in doing small experiments; if there’s a large deployment, the keeper should have felt conviction that large deployments of this sort bring fruit)

    That keeper must also have enough eyes on the results of that deployment that they can update sensibly.

I’ll spell out what this means in the case of CFAR, and then explain why I care.

What this means in the case of aCFAR:

This CFAR makes use of three main resource flows:

  • Staff and volunteer time and energy

  • Participant desire to come to workshops and test sessions, engage with our attempted rationality techniques, do life a bit differently in contact with us, and let us see something of the results

  • Money (from donors, workshop revenue, and other groups renting our venue)

We want all these resources used in ways where their keepers have grounded reason to think it’ll help with something they care about (and have feedback loops for checking).

Concretely, I’m aiming for:

Staff and volunteers have better lives (or not-worse lives) via our involvement with CFAR, including in the short- and medium-run

In CFAR of 2012-2020, many of us sacrificed for CFAR – we e.g. worked 60+ hrs/​week, had distorted social patterns with folks in the rationality community, and otherwise paid (and sometimes caused) large costs. I’d like to arrange our culture so that people don’t do that this time around. I want us to each be simply, groundedly in favor of what we’re doing, without trusting in long-term unseen effects on the post-AGI future or anything else.

(Here and elsewhere, it’s fine if staff and volunteers sometimes try things that hurt us. The principle isn’t “no costs” or “no one made worse-off ever.” It’s rather “no key resource flows, ones that CFAR is reinforced by and grows around, that make people worse-off.” One-off “ouches” are part of how we locate what works, and are fine as long as we update away from them instead of learning to depend on them.)

Participants try aCFAR’s suggested habits based on their own inside views (not our charisma or claimed knowledge)

Some participants have historically shown up to the workshop expecting to be told what to do by people who know the answer. But I want us to resist this pressure, and to create a culture of “practice trusting your own judgment, and making many small experiments while seeing yourself as the author and experiment-iterator for your life and habits.”

Donors

I go into much more detail on this one in who I hope does and doesn’t consider donating.

Why this principle

I’m afraid that otherwise we’ll do a bunch of hard work, at large costs, that nets out to “harmful, on average, after considering opportunity costs.” I’m also afraid that all that work won’t even teach us much because, for most of it, there was no conscious human who individually thought it a good idea. (This is coming out of my 2012-2020 experiences.)

To spell out my thinking:

First: people often learn more by making their own mistakes than by “making other people’s mistakes.”

This is easiest to see if we consider a concrete context such as chess. If I play chess from my own inside view, I will repeatedly make moves that look like good ideas to me – and then my opponent will often show me how exactly my inside view was wrong by exploiting my errors. If I instead play chess by repeatedly trying moves my friend thinks are good, I’m likely to learn less, because my friend’s moves aren’t rooted in a detailed inside-view lodged in my head.

There are exceptions – maybe my friend has a Cool Chess Trick that I can understand once I try it, and that wouldn’t have occurred to me on my own – but these exceptions work when they’re somehow supporting an existing, intact flow of my own autonomous choice.

Second: I don’t want to build habits or culture (in our alumni) that’ll be easy for cult leaders or others to exploit.

If workshop guests practice deferring to us about what weird things to do with their minds – especially if they do so for extended periods, based on wispy claims about long-term payoffs, e.g. “this’ll help with AI risk somehow” – this risks setting some up to later try deferring to people running more obviously unhealthy cults. I speak from experience.

I also hope a culture of “remember the buck stops with you; check whether it is producing fruits you directly feel good about” may help with the rationalist community’s tendency to enable AI companies. But this is only a hope.

Third: I want good hygiene near CFAR and the rationalists / I don’t want to leave metaphorical rotting meat on our kitchen counter.

If you’ll pardon a metaphor: having living, healthy humans in a kitchen is mostly fine, hygiene-wise. Having a large slab of unrefrigerated meat sitting in the kitchen (no longer alive, and so no longer tied in with a living organism’s immune system) is a hygiene problem, especially after a while.

I suspect that if we have “living resource flows” across CFAR, the memes and habits and culture-bits that survive and spread here will mostly be good for us and others. I suspect by contrast that if we have ungrounded resource flows (i.e., if we ignore this principle), we’ll risk breeding “parasitic memes” (or people) that are optimized to use up all the free energy in the system and that don’t tend to the conditions required for healthy life.

I mean it

If we can’t hit this principle (or the truer spirit behind it), my plan is to either figure out how to hit it, or close CFAR.

(Although, here as elsewhere, I may revise my views; and I’ll update this post if I do; these principles are not permanent promises.)

Some principles you might assume we have that we don’t have:

  • Safety/​vetting/​”full protection” as a maximum priority. We care about safe experiences and environments, but not to the exclusion of all else.

  • Maximum data-backedness (we like data, but most of our stuff hasn’t been verified by RCTs, and we also believe in acting on our intuitions and inside views and in helping you act on yours)

  • Trying to be “The” canonical Rationality Center, or to do everything the one objectively best way. (In fact, we are aware that we are one project in a world with many cool projects and much space. We aim to do our thing without hogging the whole “rationality” namespace, or the whole space for rationality-related cultural experiments.)

  • I’m not sure what else goes here, but I welcome questions.

Why we need your support /​ some cruxes for continuing this CFAR

There’s a sense in which we don’t need anybody. I could sit in my room, call myself an “applied rationality researcher,” and write things I called “rationality exercises” on paper or something.

But if we’re going to do something that’s not pretend, then we need people. And we need to find a way that there’s something in it for those people – a resource flow that gives back to them. (Otherwise, it’s still pretend.)

Why ask for donations?

We’re asking for donations because it takes money to run CFAR. If there are enthusiastic people out there who are willing and able to help fund us, that’ll both help a lot and seem wholesome. We aim to find a set of people who want the kind of practice we are building, and who want to build it, believe in its possibility, and try it together.

If nobody donates, we’ll likely continue; in extremity, we could e.g. sell our Bodega Bay venue, which would give us a few years’ operating expenses at our current, fairly minimalist budget. (That said, we love our venue and don’t want to sell it; more on that later.)

But if nobody donates and nobody cool wants to kibitz and all the people who try our workshop kinda want their time back and so on, of course we quit. Our main business in interacting with the community is to find a way to do cool stuff, via resources from some of you, in such a way that everyone’s glad. I suspect, but am not sure, that getting some donations from some of you is part of how to build the good, living center we are seeking.

Some disagree with us, and we’re doing this anyway

It is not the case that everyone who’s had much contact with past-CFAR believes resuming workshops is a good idea.

In particular:

  1. In the comments thread of our last post, Duncan Sabien (who worked for CFAR from 2015 to 2019, served for a long time as our Curriculum Director, and, among other things, wrote the CFAR handbook), spoke against CFAR in strong terms.

  2. I also got several quieter responses along the lines of “hmm, really? I’m not sure if that’s a good idea” when I told long-term friends and former colleagues I planned to restart CFAR. Also, I have myself shared concerns about my and CFAR’s past work, since changing my mind about some things in ~2020.

There were also cheers: a sizable majority (at least of those I heard from) offered enthusiasm, well-wishes, “I’m glad there are again CFAR workshops where I can send my friends,” “I missed you guys,” etc. Former CFAR instructors Renshin (aka Lauren Lee) and Adam Scholl did this in the public comment thread. And I of course landed solidly at “yes, I want this enough that I’m willing to put in real effort.”

But I want to acknowledge that some disagree, for a few reasons:

  1. It’s more honest to potential donors;

  2. I’d like those with serious doubts (including folks who might normally be shy, quiet, or agreeable) to have a way to mention these without disrupting a conversation that assumes they don’t exist;

  3. I want to show off aCFAR’s new ability to put coordinated effort into a thing some disagree with

Let me elaborate on (3): Back in 2014-2020, I would freak out whenever some serious thread of public conversation cast doubts on CFAR. I’d do this because I knew I needed CFAR staff’s morale, and I believed (accurately, I think) that many would lose their morale if even a small vocal minority said we were doing it wrong.

I believe our morale is somehow stabler now. (Perhaps partly because we factored aCFAR’s “believing in”s out separately from our epistemics, and also because we’re a particular experiment we each want to do rather than a claim about the ‘objective best’.)

I care about (3) for several reasons, but one is that I want good but imperfect institutions to exist in our present world, and to do this without suppressing news of their failures. Many of the previous decades’ institutions are gone from the world of 2025.[10] I think this is in significant part caused by the combination of:

  1. the Internet making it harder to suppress evidence of errors/​doubts/​harms/​etc. (a good thing)

  2. a heuristic of “if anyone seriously objects in public, either pressure them into shutting up, or drop the project” (unfortunate, IMO).

Also, I put real effort into dismantling parts of my and CFAR’s positive reputation that I believed were false or ill-founded, and I did that partly because I didn’t think we could build something good near CFAR before that stuff was dismantled. Having completed that step (as I see it), I am eager to see what we can build on the new, partially razed ground.

Donations

Our finances

We currently have about $129k available for CFAR and its projects, which gives us about four months of runway.

To make it comfortably to the end of 2026, we think we need about $200k of additional donations (counting donations into this fundraiser, any SFF funding, and any other donations, but not counting workshop payments or venue rental revenue). We expect we’ll likely get some money from SFF (probably in the form of matching funds, in about a week), and so are setting a “basic target” of $125k and a “reach target” of $200k (as we can do more with more).
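For those who like to check the arithmetic, here’s a minimal sketch of how these figures fit together. (The implied monthly burn rate is our back-of-envelope inference from the runway figure, not an official number.)

```python
# Back-of-envelope fundraiser arithmetic, using the figures above.
available = 129_000       # funds currently available
runway_months = 4         # stated runway

# Implied burn rate -- an inference from the two numbers above, not an official figure.
implied_monthly_burn = available / runway_months   # roughly $32k/month

reach_target = 200_000    # additional donations to make it comfortably through 2026
basic_target = 125_000    # fundraiser target, assuming SFF etc. cover the rest
assumed_other_funding = reach_target - basic_target  # gap the basic target hopes SFF and others fill

print(f"Implied burn: ~${implied_monthly_burn:,.0f}/month")
print(f"Gap the basic target assumes others cover: ${assumed_other_funding:,}")
```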

For more detail on that, see this breakdown:

General Costs

CFAR has ongoing general administrative costs – accounting, staff wages for administrative tasks, and so on. We think this will cost us about $72,000 for 2026. This is a very significant decrease from e.g. 2019, as CFAR is running with a smaller and leaner staff and no longer maintains office space.

Venue

We maintain an event venue in Bodega Bay, California, which we also rent out to other groups. This venue is both one of our primary expenses and also a source of revenue. Since 2020, the venue has been a significant net expense as we have run fewer programs there and not had many bookings. However, we now have venue caretakers who are sprucing the place up, figuring out what outside groups are looking for in a venue and how we can hit it, etc. We also expect to use our venue for more CFAR programs than we have been in the past few years.

For 2026, we estimate that we will likely have total venue costs of about $285,000. This is primarily mortgage payments, utilities, various maintenance/​repair/​”venue caretaking” work, and property taxes, although it also includes supplies for programs held at the venue. We also anticipate bringing in approximately $200,000 of revenue from outside bookings (after deducting cleaning fees), as well as using the venue for our own programs, hosting some staff meetings there, and so on. The savings from our own programs there are difficult to calculate but would likely be in the tens of thousands of dollars, perhaps $35,000 to $65,000 or so across 2026.

This means we anticipate the venue will on net cost us something like $20,000 to $50,000 for 2026, depending on how many programs we end up running there, how many outside workshops we hold, and what other costs we may incur. This is not ideal but we consider it a cost worth bearing for now, and in the long run we hope to run more programs there ourselves and bring in more outside bookings such that the venue ends up breaking even or being financially positive for CFAR.[11]
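As a sanity check, the venue estimates above combine like this (all inputs are the post’s own figures; the range just propagates the savings estimate):

```python
# Venue net-cost sketch for 2026, combining the estimates above.
venue_costs = 285_000            # mortgage, utilities, maintenance, taxes, program supplies
booking_revenue = 200_000        # outside bookings, net of cleaning fees
own_program_savings = (35_000, 65_000)   # estimated range of avoided costs for our own programs

# Net cost at each end of the savings range: $50k (low savings) down to $20k (high savings).
net_cost_range = tuple(venue_costs - booking_revenue - s for s in own_program_savings)
print(net_cost_range)  # corresponds to the "$20,000 to $50,000" estimate above
```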

Workshops

Workshops are both a source of revenue and a significant cost for CFAR to run. Generally speaking, workshops gain or lose money based on how many staff members and participants are involved and how much financial aid we do or don’t offer to participants; a workshop with twenty-five participants paying full price would be profitable, while workshops with fewer participants and/or more financial aid may well lose money for CFAR on net. For instance, our November workshop ended up with a net loss of approximately $28,400.

In 2026, we currently anticipate running about four mainline workshops (one Jan 21-25 in Austin, TX and three yet to be announced). The workshop in Austin will incur venue costs that workshops held at our venue won’t. Insofar as the workshops otherwise have overall similar costs and revenues as the November workshop, we will probably see a net loss of roughly $130,600 from workshops.[12]
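A quick sketch of where the ~$130,600 projection lands relative to November’s result. (The implied extra cost for the rented Austin venue is our back-calculation from the two stated figures, not an official number.)

```python
# Workshop-loss projection sketch, extrapolating from November's result.
november_net = -28_400     # November workshop's approximate net result
num_workshops = 4          # planned mainline workshops in 2026

baseline = num_workshops * november_net   # -$113,600 if all four resemble November
projected_total = -130_600                # the projection given above

# Implied extra cost for the rented Austin venue -- a back-calculation, not a stated figure.
implied_austin_extra = baseline - projected_total
print(implied_austin_extra)
```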

We are excited to run these workshops even at a potential loss. In addition to being helpful to the participants, running workshops greatly aids our efforts to develop and refine an art of rationality. (In the long run, if our programs are any good, we should be able to fund the workshops more fully from those who attend, which will make for better feedback loops, though we may want ongoing exceptions for students / folks without much money and for folks who are coming mostly to aid rationality development work.)

We also think that workshops benefit people beyond those who attend directly – some workshop attendees have gone on to teach others concepts like double crux and other CFAR techniques, and we think running workshops provides significant value for these “grandstudents”[13] as well.

In the past, CFAR has even offered some workshops for free – for instance, the four workshops we ran in the Czech Republic during autumn 2022 were entirely free to participants. However, the overall state of the funding environment was different when those programs were being planned, and offering free mainline workshops currently seems imprudent.

Curriculum Development

In addition to the above costs, we also pay staff for general curriculum development outside of workshops – research into various aspects of rationality, work on new techniques, running test sessions where we try new material on volunteers, and so on. We project something like $25,000 in costs for this area in 2026, though this is somewhat speculative.

Aspirational

In addition to the core categories mentioned earlier, there are various other projects that CFAR would like to spend money on but currently is not funding.

For instance, in the past CFAR has supported “telos projects” – a program where CFAR provided funding for rationality-related projects that felt relevantly alive to people. In 2025, we had a few legacy projects in this area but are not soliciting new applications for telos funding; in a world where we had better funding we would like to reopen the program and use it to help new alumni run cool projects, including infrastructure for the new alumni community.

We would like to be able to pay me (Anna) to write various LessWrong posts about concepts CFAR has recently been working with, but are currently holding off on that. We would also like to (slowly, slightly) grow out our staff of curriculum developers and to modestly increase staff wages if we can.

Who I hope does, and doesn’t, consider donating

As mentioned earlier in this post, I’d like to build toward a world in which aCFAR’s donations come from, and with, the right kind of feedback loops.

I’m particularly cheerful (happy, relieved, joyful, grateful) about donations stemming from any of:

  1. You want to say a friendly “hello” in a donation-shaped way. Sending us $20, or $200 if you are so minded, is a good way to let us know, “Hi aCFAR, I see you, I smile at you, I hope you stick around.”

  2. CFAR, or things relevantly similar to CFAR, made you much better off personally, and you’d like to “pay it forward.” (I donated to Lightcone this year because their existence makes my life much better; if you have a similar desire re: this CFAR, we appreciate it!)

  3. You expect to feel more at home in the CFAR context, in some important way, and so you’d like to enable the creation of that context, and/​or to buy into it or nudge it a bit toward being you-flavored in some way.[14]

  4. There’s something in here that you personally are rooting for, and you’re moved to root for it harder, with your dollars, so it can really be tried. (Like a home team or a city or a project in which you have pride and have/​want membership)

    The more dollars you deploy here, the more I hope you have some heart to spare to come along with your dollars, as “I care about this, and I’ll be kibitzing from the sidelines, and updating my total view of life based on how it goes, with enough context that my kibitzes and updates will make sense.” (The more of your dollars you deploy here, the easier we’ll try to make this “kibitzing from the sidelines” for you, if you’re willing.)

  5. (Particularly relevant for large donations) You want aCFAR to remember you as a key contributor and to take a deep interest in where you’re coming from and how you and we can do something that is win-win for our [hopes and dreams and hypotheses and what’s worth trying in the world] and yours. (Plus you sense the potential for collaboration.)

I’m particularly wary of donations stemming from:

  1. You’re an EA, and are hoping to donate dollars to a thing that others have already verified is an efficient “input money, output saved lives or other obvious goods” machine.

To be clear, EA is an excellent way to donate; I’m glad some people donate this way; there’d be something seriously wrong with the world if nobody did this. But it’s not what this CFAR is doing. (More on this above.)

And in my opinion (and Michael Nielsen’s in this podcast with Ajeya Cotra, if you want a case at more length), there’d be something even more wrong with the world if most resource expenditure flowed via EA-like analysis.[15]

Another reason people used to sometimes donate, which IMO doesn’t apply to us today, and so would not be a good reason today:

  1. Trying to “raise the sanity waterline” for large sets of people (we tried this some in the past, yielding e.g. Julia Galef’s excellent book and some contributions to university classes; we have no active effort here now)

And a couple other reasons to donate:

  • You want this weird set of people (who’re having lots of impact on the world, for whatever reason: the rationality community and its many “adjacent” communities and people) to have enough total community infrastructure. (And you think we help that, and don’t much harm that.)

  • You want better eyesight on what happened to the hopes of the original rationalist project, and you think [this particular attempt at “let’s try this again, with a more transparent conversation this time”] will give us all some of the light we need

Ways to help CFAR or to connect to CFAR besides donating:

There are several good ways to help CFAR financially besides donating. You can:

  • Come to a workshop (or help a friend realize they’d enjoy the workshop, if they would)

  • Book our venue (or help a friend realize they’d enjoy booking the venue, if they would)

  • Sign up for our online test sessions to help us develop our material

  • Try our coaching (for yourself or for a friend).

There are also a pile of ways to help this CFAR and our mission non-financially. (Most of the resources we run on are non-financial, and are shared with us by hopeful rationality fans.) Basically: kibitz with us here, or in a test session, or at a workshop. Attending a workshop often helps even if you come on a full scholarship, as having varied, cool participants makes our workshops more perspectives-rich and generative.

For bonus points, maybe come to a workshop and then write up something substantial about it on LessWrong. (Scholarships are available for this purpose sometimes.)

Perks for donating

If you donate before Jan 31, you’ll also get, if you want:

  • A CFAR sticker pack (for donations ≥ $20)

  • A CFAR T-shirt, with our logo plus “don’t believe everything you think” (for donations ≥ $200)

  • An invitation to a “CFAR donors” party at our Bodega Bay venue in February, with drinks, lightning talks, etc (for donations ≥ $200)

  • We take you out to lunch (if geography can be navigated), try to understand how you’ve been able to do the cool things you’ve been able to do, and discuss the coolest parts of you that we can see in a Shortform LW post (that can mention you by name, or not) and an internal colloquium talk you can attend and kibitz in. (Or, we do this with a particular book that you love and recommend to us.) (for donations ≥ $5k)

Also, if there’s something in particular you’d like CFAR to be able to do, such as run workshops in a particular city or run an alumni event focusing on a particular component of rationality, and you’re considering a more substantial donation, please reach out (you can book a meeting via calendly, or email donate@rationality.org).

To the conversation!

Thank you for your curiosity about CFAR, and for reading (at least some of) this post! I hope you introduce yourself in the comments and that – if you end up donating (or kibitzing, or attending a workshop, or getting involved in us in whatever way) – it ends up part of a thing that’s actually good for you and the contexts you care about. And that you and we learn something together.

Yours in aspiring rationality,
Anna and aCFAR

  1. ^

    ‘aCFAR’ stands for “a Center For Applied Rationality.” We adopted the ‘a’ part recently, because calling ourselves ‘the’ Center for Applied Rationality seems obviously wrong. But feel free not to bother with the ‘a’ if it’s too annoying. I personally say ‘a’ when I feel like it.

  2. ^

    One of the best ways to get to know someone is to team up on something concrete; kibitzing on a current CFAR stuck point is my suggestion for how to try a little of that between you and aCFAR.

  3. ^

    Thanks to Davis Kingsley, John Salvatier, Paola Baca and Zvi Mowshowitz for writing help. (Particularly Davis Kingsley, who discussed practically every sentence, revised many, and made the whole thing far more readable.) Thanks to Jack Carroll for photos. Thanks to Zack Davis and Claude Code for creating the thermometer graphic up top. Remaining errors, wrong opinions, etc. are of course all mine.

  4. ^

    My mom wanted to teach her kids math, so we could be smart. And I wanted… to be like her… which meant I also wanted to teach myself/​others math! :) (Rather than, say, wanting to learn math.) Rationality education gives me an even better chance to see the gears of thinking/​updating.

  5. ^

This overcounts a bit: the number is based on totaling the attendee counts of many different programs, and some people attended multiple programs, so the number of unique individuals who attended CFAR programs is lower.

  6. ^

    EA spaces were receiving large influxes of new people at the time, and I hoped CFAR workshops could help the EA and rationality communities to assimilate the large waves of new people with less dilution of what made these spaces awesome. (Lightcone has mostly taken over the “develop and spread useful vocabulary, and acculturate newcomers” role in recent years, and has done it spectacularly IMO.)

  7. ^

Unlike some bands, we didn’t have substance abuse. But, like traveling bands, we traveled a lot to do high-intensity soul-baring stuff in a context where we were often exhausted but “the show must go on.” I believe many of us, and many of our working relationships, got traveling-band-like scars. Also, we had ourselves a roster of potentially-kinda-invasive “CFAR techniques”; in hindsight some of our uses of these seem unwholesome to me. (I think these techniques are neat when used freely by an autonomous person, but are iffy at best when used to “help” a colleague stretch themselves harder for a project one is oneself invested in.)

  8. ^

There would still be many details to sort through. E.g., CFAR is aiming to be an unusually low-staff-charisma organization in which staff suggest exercises or whatever to participants in ways that’re unusually non-dizzying; Valentine’s native conversational style has a bit more charismatic oomph than we’re aiming for. But I love the idea of collaborating with Valentine on stuff about memes, PCK-seeking, what sorts of systematicity might allow decent epistemics, etc. I also like the idea of having one more person who’s been around from the beginning, and has seen both CFAR’s early generativity and our failure modes, keeping an eye out.

  9. ^

    We would also try to find other ways to make money, and tinker/​brainstorm broadly.

  10. ^

    For instance: mainstream media and academia both have much less credibility and notably less money; the ACLU has lost most of its vitality; many of the big organizations in the EA space from 2015ish have either ceased to do much public leadership there or ceased existing altogether; and I would guess the trends in Bowling Alone have continued, although I have not checked.

  11. ^

    It’s unlikely the venue would generate more than its costs in direct booking revenue alone; rather, the combination of booking revenue and cost savings for our own programs would exceed the costs of operating and maintaining the venue. Additionally, we think the venue gives us a bunch of spirit and beauty, saves a bunch of staff time on logistics for each workshop we hold there, lets us support LARC and other groups we care about, and makes it easier for us to consider possible large expansions to our programs.

  12. ^

    There’s a lot of variability in what workshops end up looking like, and there’s some reason to believe later workshops may generate more revenue, but we’re using November here as the most obvious basis for comparison.

  13. ^

    A term coined by Duncan, meaning “students of our students,” which we continue to find useful in thinking about the impact of workshops and other programs.

  14. ^

    Lighthaven, the LW website, and other Lightcone-enabled social contexts are truly remarkable, IMO – one of the last bastions of general-purpose grounded truthseeking conversation on the internet. Many of you feel most at home there, and so should send such donations only to Lightcone. But some of you should perhaps put some or all of your ‘I want to support contexts that support people like me, or that support conversations I’ll feel at home near’ budget toward CFAR. Personally, I’m donating $10k to Lightcone and putting soul and work into aCFAR, and this leaves me feeling happier and more a part of things than if I were to skip either.

  15. ^

    Briefly: we humans are local creatures, and we probably create better things, things that contribute more to the long run, if we let ourselves have deep local interests and loyalties (to particular lines of research, to particular friendships and communities, to particular businesses or projects we are invested in) without trying to always be doing the thing that would be highest-impact for a detailless agent who happens to be us, and without trying to always be ready to change our plans and investments on a dime. I admit I’m caricaturing EA a bit, but I believe the point holds sans caricature; I would very much love to discuss this point at arbitrary length in the comment thread if you’re interested.