It sounds way more like “raise the sanity waterline of smart people” than “raise the sanity waterline of the population at large”
Well, it wasn’t the former either. As Anna Salamon has said:
We were and are (from our founding in 2012 through the present) more focused on rationality education for fairly small sets of people who we thought might strongly benefit the world, e.g. by contributing to AI safety or other high-impact things, or by adding enrichment to a community that included such people. (Though with the notable exception of Julia writing the IMO excellent book “Scout Mindset,” which she started while at CFAR and which I suspect reached a somewhat larger audience.)
There’s an old (2021-ish?) Qiaochu Yuan Twitter thread about this. It was linked on this site at some point. I wish I could find it.[1]
Further information if anyone is kind/curious enough to try looking it up: I recall part of the very long thread, sometime near the beginning, was him expressing his frustration and sadness (in typical Qiaochu style, using highly emotive language) about the fact that they (i.e., CFAR) were running this math camp-style thing for high-performing Olympiad students and talking about how they were going to teach them general reasoning tips and other stuff having to do with rationality, but actually it was (in his telling, mind you) all about trying to manipulate these kids into doing AI safety research. And he was conflicted about what he felt were the impure motives of CFAR and the fact that they were advertising something false and not really trying to do the positive-vibes thing of helping them achieve their potential and find what they care about most, but instead essentially manipulating them into a specific shoehorned arena.
I think many of us, during many intention-minutes, had fairly sincere goals of raising the sanity of those who came to events, and took many actions backchained from these goals in a fairly sensible fashion. I also think I and some of us worked to: (a) bring to the event people who were unusually likely to help the world, such that raising their capability would help the world; (b) influence people who came to be more likely to do things we thought would help the world; and (c) draw people into particular patterns of meaning-making that made them easier to influence and control in these ways, although I wouldn’t have put it that way at the time, and I now think this was in tension with sanity-raising in ways I didn’t realize at the time.
I would still tend to call the sentence “we were trying to raise the sanity waterline of smart rationality hobbyists who were willing and able to pay for workshops and do practice and so on” basically true.
I also think we actually helped a bunch of people get a bunch of useful thinking skills, in ways that were hard and required actual work/iteration/attention/curiosity/etc (which we put in, over many years, successfully).
Part of the situation, I think, is also that different CFAR founders had somewhat different goals (I’d weakly guess Julia Galef actually did care more about “raise the broader sanity waterline”, but she also left a few years in), so there wasn’t quite a uniform vision to communicate.
Seems very plausible to an outsider like me. But that still doesn’t excuse[1] the public communications around this.
The very earliest post directly about CFAR on this site is the following, containing this beautiful excerpt:
The Singularity Institute wants to spin off a separate rationality-related organization. (If it’s not obvious what this would do, it would e.g. develop things like the rationality katas as material for local meetups, high schools and colleges, bootcamps and seminars, have an annual conference and sessions in different cities and so on and so on.)
The founding principles of CFAR, as laid out by Anna Salamon, say:
We therefore aim to create a community with three key properties:
Competence—The ability to get things done in the real world. For example, the ability to work hard, follow through on plans, push past your fears, navigate social situations, organize teams of people, start and run successful businesses, etc.
Epistemic rationality—The ability to form relatively accurate beliefs. Especially the ability to form such beliefs in cases where data is limited, motivated cognition is tempting, or the conventional wisdom is incorrect.
Do-gooding—A desire to make the world better for all its people; the tendency to jump in and start/assist projects that might help (whether by labor or by donation); and ambition in keeping an eye out for projects that might help a lot and not just a little.
Then Zvi says:

My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.
I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others attending for that purpose who had been selected to attend, was wonderful and valuable as well. Such benefits should not be underestimated.

[1] If you think there’s something to excuse! If you think there’s nothing wrong with what I’m laying out below… that’s your prerogative.
Sorry, to amend my statement about “wasn’t aimed at raising the sanity waterline of eg millions of people, only at teaching smaller sets”:
Way back when Eliezer wrote that post, we really were thinking of trying to raise the rationality of millions, or at least of hundreds of thousands, via clubs and schools and things. It was in the initial mix of visions. Eliezer spent time trying to write a sunk costs unit that could be read aloud to a meetup by someone who didn’t themselves understand much rationality, and could cause the meetup to learn skills. We imagined maybe finding the kinds of donors who donated to art museums and getting them to donate to us instead so that we could eg nudge legislation they cared about by causing the citizenry to have better thinking skills.
However, by the time CFAR ran our first minicamps in 2012, or conducted our first fundraiser, our plans had mostly moved to “teach those who are unusually easy to teach via being willing and able to pay for workshops, practice, care, etc”. I preferred this partly because I liked getting the money from the customers we were trying to teach, so that they’d be who we were responsible to (fewer principal-agent problems, compared to if someone with a political agenda wanted us to make other people think better; though I admit this is ironic given I now think there were some problems around us helping MIRI and being funded by AI risk donors while teaching some rationality hobbyists who weren’t necessarily looking for that). I also preferred it because I thought we knew how to run minicamps that would be good, and I didn’t have many good ideas for raising the sanity waterline more broadly.
We did make nonzero attempts at raising the sanity waterline more broadly: Julia’s book, as mentioned elsewhere, but also, we collaborated a bit on a rationality class at UC Berkeley, tried to prioritize workshop applicants who seemed likely to teach others well (including giving them more financial aid), etc.
I agree, although I also think we ran with this where it was convenient instead of hashing it out properly (like, we asked “what can we say that’ll sound good and be true” when writing fundraiser posts, rather than “what are we up for committing to in a way that will build a high-integrity relationship with whichever community we actually want to serve, and will let any other communities who we don’t want to serve realize that and stop putting their hopes in us.”)

But I agree re: Julia.
It seems to me that at least while I worked there (2017–2021), CFAR did try to hash this out properly many times; we just largely failed to converge. I think we had a bunch of employees/workshop staff over the years who were in fact aiming largely or even primarily to raise the sanity waterline, just in various/often-idiosyncratic ways.