Seems very plausible to an outsider like me. But that still doesn’t excuse[1] the public communications around this.
The very earliest post directly about CFAR on this site is the following, containing this beautiful excerpt:
The Singularity Institute wants to spin off a separate rationality-related organization. (If it’s not obvious what this would do, it would e.g. develop things like the rationality katas as material for local meetups, high schools and colleges, bootcamps and seminars, have an annual conference and sessions in different cities and so on and so on.)
The founding principles of CFAR, as laid out by Anna Salamon, say:
We therefore aim to create a community with three key properties:
Competence—The ability to get things done in the real world. For example, the ability to work hard, follow through on plans, push past your fears, navigate social situations, organize teams of people, start and run successful businesses, etc.
Epistemic rationality—The ability to form relatively accurate beliefs. Especially the ability to form such beliefs in cases where data is limited, motivated cognition is tempting, or the conventional wisdom is incorrect.
Do-gooding—A desire to make the world better for all its people; the tendency to jump in and start/assist projects that might help (whether by labor or by donation); and ambition in keeping an eye out for projects that might help a lot and not just a little.
My experience with CFAR starts with its founding. I was part of the discussions on whether it would be worthwhile to create an organization dedicated to teaching rationality, how such an organization would be structured and what strategies it would use. We decided that the project was valuable enough to move forward, despite the large opportunity costs of doing so and high uncertainty about whether the project would succeed.
I attended an early CFAR workshop, partly to teach a class but mostly as a student. Things were still rough around the edges and in need of iterative improvement, but it was clear that the product was already valuable. There were many concepts I hadn’t encountered, or hadn’t previously understood or appreciated. In addition, spending a few days in an atmosphere dedicated to thinking about rationality skills and techniques, and socializing with others who had been selected to attend for that purpose, was wonderful and valuable as well. Such benefits should not be underestimated.
Sorry, to amend my statement about “wasn’t aimed at raising the sanity waterline of eg millions of people, only at teaching smaller sets”:
Way back when Eliezer wrote that post, we really were thinking of trying to raise the rationality of millions, or at least of hundreds of thousands, via clubs and schools and things. It was in the initial mix of visions. Eliezer spent time trying to write a sunk costs unit that could be read aloud to a meetup by someone who didn’t themselves understand much rationality, and could cause the meetup to learn skills. We imagined maybe finding the kinds of donors who donated to art museums and getting them to donate to us instead, so that we could e.g. nudge legislation they cared about by causing the citizenry to have better thinking skills.
However, by the time CFAR ran our first minicamps in 2012, or conducted our first fundraiser, our plans had mostly moved to “teach those who are unusually easy to teach via being willing and able to pay for workshops, practice, care, etc”. I preferred this partly because I liked getting the money from the customers we were trying to teach, so that they’d be who we were responsible to (fewer principal-agent problems, compared to if someone with a political agenda wanted us to make other people think better; though I admit this is ironic given I now think there were some problems around us helping MIRI and being funded by AI risk donors while teaching some rationality hobbyists who weren’t necessarily looking for that). I also preferred it because I thought we knew how to run minicamps that would be good, and I didn’t have many good ideas for raising the sanity waterline more broadly.
We did make nonzero attempts at raising the sanity waterline more broadly: Julia’s book, as mentioned elsewhere, but also, we collaborated a bit on a rationality class at UC Berkeley, tried to prioritize workshop applicants who seemed likely to teach others well (including giving them more financial aid), etc.
Then Zvi says:
If you think there’s something to excuse! If you think there’s nothing wrong with what I’m laying out below… that’s your prerogative.