Also, I’m taking a look at CFAR’s website from early in its history… here’s some quotes:
When we make decisions about the things we really care about — like our health, our families, our jobs, or the world at large — we tell ourselves, “I really thought this through. I did the best I could, right?”
But careful thinking just isn’t enough to understand our minds’ hidden failures. Over the past fifty years, science has discovered common human error patterns — cognitive biases — whereby people of all levels of education and intelligence will misjudge reality, fail to achieve their goals, and make all kinds of self-defeating mistakes. And these biases are so basic and pervasive to human thinking that we’re all making these mistakes every day without even noticing. So what can be done?
Thankfully, careful thinking is no longer the best we can do. By taking lessons from science about the very foundations of human intuition, we can begin patching the problems and find new ways to engage our strengths. We can do better.
And that’s why CFAR exists: to translate research into practice, turn cognitive science into cognitive technology, and bring the fruits of experimental psychology to bear for individuals and the world. We turn mathematical and empirical insights about the human mind into mental exercises that train the everyday skills of making accurate predictions, avoiding self-deception, and getting your motivation where your arithmetic says it should be. And we select and improve our exercises through rapidly iterated testing sessions, through our workshops, and through long-term follow-ups we’re conducting on training with randomized admissions.
This sure sounds a lot like “raise the sanity waterline of the population at large”.
CFAR’s vision:
CFAR is devoted to teaching those techniques, and the math and science behind them, to adults and exceptional youth. In the process, we’re breaking new ground in studying the long-term effects of rationality training on life outcomes using randomized controlled trials. We’re contributing to pedagogical knowledge about how to teach this emerging discipline at universities and elsewhere. And we’re building a real-life community of tens of thousands of students, entrepreneurs, researchers, programmers, philanthropists, and others who are passionate about using rationality to improve the decisions they make for themselves and for the world.
EDIT: See also CFAR’s initiatives, as well as their “Donate” page, where they wrote:
Helping CFAR means giving aspiring young rationalists the tools to make thoughtful, well-reasoned decisions not just for their own lives, but for the economic, political and technological future of our society.
This definitely, definitely sounds like “raise the sanity waterline of the population at large”.
It seems almost impossible to read this text and not conclude that “raise the sanity waterline of the population at large” was exactly CFAR’s goal. If in fact that was not their goal, then their public statements would seem to have been deceptive in a way that seems very unlikely to have been accidental.
IMO, our goal was to raise the sanity of particular smallish groups who attended workshops, but wasn’t very much to have effects on millions or billions (we would’ve been in favor of that, but most of us mostly didn’t think we had enough shot to try backchaining from that). Usually when people say “raise the sanity waterline” I interpret them as discussing stuff that happens to millions.
I agree the “tens of thousands” in the quoted passage is more than was attending workshops, and so pulls somewhat against my claim.
I do think our public statements were deceptive, in a fairly common but nevertheless bad way, in that we had many conflicting visions, tended to avoid contradicting people who thought we were gonna do all the good things that at least some of us had at least some desire/hope to do, and we tended in our public statements/fundraisers to try to avoid alienating all those hopes, as opposed to the higher-integrity / more honorable approach of trying to come to a coherent view of which priorities we prioritized how much and trying to help people not have unrealistic hopes in us, and not have inaccurate views of our priorities.
we tended in our public statements/fundraisers to try to avoid alienating all those hopes, as opposed to the higher-integrity / more honorable approach of trying to come to a coherent view of which priorities we prioritized how much and trying to help people not have unrealistic hopes in us, and not have inaccurate views of our priorities
I… wish to somewhat-defensively note, fwiw, that I do not believe this well-describes my own attempts to publicly communicate on behalf of CFAR. Speaking on behalf of orgs is difficult, and I make no claim to have fully succeeded at avoiding the cognitive biases/self-serving errors/etc. such things incentivize. But I certainly earnestly tried, to what I think was (even locally) an unusual degree, to avoid such dishonesty.
(I feel broadly skeptical the rest of the org’s communication is well-described in these terms either, including nearly all of yours Anna, but ofc I can speak most strongly about my own mind/behavior).
It sounds way more like “raise the sanity waterline of smart people” than “raise the sanity waterline of the population at large”. If they wanted to raise the sanity waterline of the population at large, they’d be writing books for high- and middle-schoolers.
It sounds way more like “raise the sanity waterline of smart people” than “raise the sanity waterline of the population at large”
Well, it wasn’t the former either. As Anna Salamon has said:
We were and are (from our founding in 2012 through the present) more focused on rationality education for fairly small sets of people who we thought might strongly benefit the world, e.g. by contributing to AI safety or other high-impact things, or by adding enrichment to a community that included such people. (Though with the notable exception of Julia writing the IMO excellent book “Scout Mindset,” which she started while at CFAR and which I suspect reached a somewhat larger audience.)
There’s an old (2021-ish?) Qiaochu Yuan Twitter thread about this. It was linked on this site at some point. I wish I could find it.[1]
Further information if anyone is kind/curious enough to try looking it up: I recall part of the very long thread, sometime near the beginning, was him expressing his frustration and sadness (in typical Qiaochu style, using highly emotive language) about the fact that they (i.e., CFAR) were running this math camp-style thing for high-performing Olympiad students and talking about how they were going to teach them general reasoning tips and other stuff having to do with rationality, but actually it was (in his telling, mind you) all about trying to manipulate these kids into doing AI safety research. And he was conflicted about what he felt were the impure motives of CFAR and the fact that they were advertising something false and not really trying to do the positive-vibes thing of helping them achieve their potential and find what they care about most, but instead essentially manipulating them into a specific shoehorned arena.
I think many of us, during many intention-minutes, had fairly sincere goals of raising the sanity of those who came to events, and took many actions backchained from these goals in a fairly sensible fashion. I also think I and some of us worked to: (a) bring to the event people who were unusually likely to help the world, such that raising their capability would help the world; (b) influence people who came to be more likely to do things we thought would help the world; and (c) draw people into particular patterns of meaning-making that made them easier to influence and control in these ways, although I wouldn’t have put it that way at the time, and I now think this was in tension with sanity-raising in ways I didn’t realize at the time.
I would still tend to call the sentence “we were trying to raise the sanity waterline of smart rationality hobbyists who were willing and able to pay for workshops and do practice and so on” basically true.
I also think we actually helped a bunch of people get a bunch of useful thinking skills, in ways that were hard and required actual work/iteration/attention/curiosity/etc (which we put in, over many years, successfully).
Part of the situation I think is also that different CFAR founders had somewhat different goals (I’d weakly guess Julia Galef actually did care more about “raise the broader sanity waterline”, but she also left a few years in), so there wasn’t quite a uniform vision to communicate.
Seems very plausible to an outsider like me. But that still doesn’t excuse[1] the public communications around this.
The very earliest post directly about CFAR on this site is the following, containing this beautiful excerpt:
The Singularity Institute wants to spin off a separate rationality-related organization. (If it’s not obvious what this would do, it would e.g. develop things like the rationality katas as material for local meetups, high schools and colleges, bootcamps and seminars, have an annual conference and sessions in different cities and so on and so on.)
The founding principles of CFAR, as laid out by Anna Salamon, say:
We therefore aim to create a community with three key properties:
Competence—The ability to get things done in the real world. For example, the ability to work hard, follow through on plans, push past your fears, navigate social situations, organize teams of people, start and run successful businesses, etc.
Epistemic rationality—The ability to form relatively accurate beliefs. Especially the ability to form such beliefs in cases where data is limited, motivated cognition is tempting, or the conventional wisdom is incorrect.
Do-gooding—A desire to make the world better for all its people; the tendency to jump in and start/assist projects that might help (whether by labor or by donation); and ambition in keeping an eye out for projects that might help a lot and not just a little.
Then Zvi says:
My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.
I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others attending for that purpose that had been selected to attend, was wonderful and valuable as well. Such benefits should not be underestimated.
[1] If you think there’s something to excuse! If you think there’s nothing wrong with what I’m laying out below… that’s your prerogative
Sorry, to amend my statement about “wasn’t aimed at raising the sanity waterline of eg millions of people, only at teaching smaller sets”:
Way back when Eliezer wrote that post, we really were thinking of trying to raise the rationality of millions, or at least of hundreds of thousands, via clubs and schools and things. It was in the initial mix of visions. Eliezer spent time trying to write a sunk costs unit that could be read aloud to a meetup by someone who didn’t themselves understand much rationality, and could cause the meetup to learn skills. We imagined maybe finding the kinds of donors who donated to art museums and getting them to donate to us instead, so that we could e.g. nudge legislation they cared about by causing the citizenry to have better thinking skills.
However, by the time CFAR ran our first minicamps in 2012, or conducted our first fundraiser, our plans had mostly moved to “teach those who are unusually easy to teach via being willing and able to pay for workshops, practice, care, etc”. I preferred this partly because I liked getting the money from the customers we were trying to teach, so that they’d be who we were responsible to (fewer principal-agent problems, compared to if someone with a political agenda wanted us to make other people think better; though I admit this is ironic given I now think there were some problems around us helping MIRI and being funded by AI risk donors while teaching some rationality hobbyists who weren’t necessarily looking for that). I also preferred it because I thought we knew how to run minicamps that would be good, and I didn’t have many good ideas for raising the sanity waterline more broadly.
We did make some nonzero attempts at raising the sanity waterline more broadly: Julia’s book, as mentioned elsewhere, but also, we collaborated a bit on a rationality class at UC Berkeley, tried to prioritize workshop applicants who seemed likely to teach others well (including giving them more financial aid), etc.
I agree, although I also think we ran with this where it was convenient instead of hashing it out properly (like, we asked “what can we say that’ll sound good and be true” when writing fundraiser posts, rather than “what are we up for committing to in a way that will build a high-integrity relationship with whichever community we actually want to serve, and will let any other communities who we don’t want to serve realize that and stop putting their hopes in us”).
But I agree re: Julia.
It seems to me that at least while I worked there (2017-2021), CFAR did try to hash this out properly many times; we just largely failed to converge. I think we had a bunch of employees/workshop staff over the years who were in fact aiming largely or even primarily to raise the sanity waterline, just in various/often-idiosyncratic ways.
No, this:
Helping CFAR means giving aspiring young rationalists the tools to make thoughtful, well-reasoned decisions not just for their own lives, but for the economic, political and technological future of our society.
Sounds exactly not like “raising the sanity waterline”. It sounds like “we are going to raise the sanity of a relatively small number of people (namely young aspiring rationalists), who will then benefit the broader world (via unspecified mechanisms, but which by implication seem likely to include doing good things on AI safety, or developing other helpful new technologies, etc.)”.
And the rest of what I quoted? And that paragraph in the context of the rest of what I quoted? And everything that @sunwillrise has quoted elsewhere in this subthread? Does none of that sound like “raising the sanity waterline” to you either? (Even the parts where e.g. Ben Pace talks about “massively increas[ing] the sanity waterline on a global scale”?)