I’m an admin of this site; I work full-time on trying to help people on LessWrong refine the art of human rationality. (Longer bio.)
I generally feel more hopeful about a situation when I understand it better.
I feel like one of the most valuable things we have on LessWrong is a broad, shared epistemic framework: ideas with which we can take steps through concept-space together and reach important conclusions more efficiently than other intellectual spheres, e.g. ideas about decision theory, ideas about overcoming coordination problems, etc. I believe all of the founding staff of CFAR had read The Sequences, were versed in things like what it means to ask where you got your bits of evidence from and what it means for updating on the evidence to have a formal meaning, and had absorbed a model of Eliezer’s law-based approach to reasoning about your mind and the world.
In recent years, when I’ve been at CFAR events, I generally feel like at least 25% of attendees probably haven’t read The Sequences, aren’t part of this shared epistemic framework, and don’t have an understanding of that law-based approach, and that they don’t feel a need to cash out their models of the world into explicit reasoning and communicable models that others can build on. I’ve also increasingly felt this way about CFAR staff over the years (e.g. it’s not clear to me whether all current CFAR staff have read The Sequences). And to be clear, I think if you don’t have a shared epistemic framework, you often just can’t talk to each other very well about things that aren’t highly empirical, certainly not at the scale of more than 10-20 people.
So I’ve been pretty confused by why Anna and other staff haven’t seemed to think this is very important when designing the intellectual environment at CFAR events. I’m interested to know how you think about this?
I certainly think a lot of valuable introspection and modelling work still happens at CFAR events, I know I personally find it useful, and I think that e.g. CFAR has done a good job in stealing useful things from the circling people (I wrote about my positive experiences circling here). But my sense for a number of the attendees is that even if they keep introspecting and finding out valuable things about themselves, 5 years from now they will not have anything to add to our collective knowledge-base (e.g. by writing a LW sequence that LWers can understand and get value from) — even for a LW audience that considers all Bayesian evidence admissible, however weird or unusual — because they were never trying to think in a way that could be communicated in that fashion. The Gwerns and the Wei Dais and the Scott Alexanders of the world won’t have learned anything from CFAR’s exploration.
As an example of this, Val (who was a cofounder but doesn’t work at CFAR any more) seemed genuinely confused when Oli asked for third-party verifiable evidence for the success of Val’s ideas about introspection. Oli explained that there was a lemons problem (i.e. information asymmetry) when Val claimed that a mental technique had changed his life radically, when all of the evidence he offered was of the kind “I feel so much better” and “my relationships have massively improved” and so on. (See Scott’s Review of All Therapy Books for more of what I mean here, though I think this is a pretty standard idea.) Val seemed genuinely confused about why Oli was asking for third-party verifiable evidence, and genuinely surprised that claims like “This last September, I experienced enlightenment. I mean to share this as a simple fact to set context” would be met with a straight “I don’t believe you.” This was really worrying to me, and it’s always been surprising to me that this part of him fit naturally into CFAR’s environment and that CFAR’s natural antibodies weren’t kicking hard against it.
To be clear, I think several of Val’s posts in that sequence were pretty great (e.g. The Intelligent Social Web is up for the 2018 LW review, and you can see Jacob Falkovich’s review on how the post changed his life), and I’ve personally had some very valuable experiences with Val at CFAR events. But I expect, had he continued in this vein at CFAR, that over time Val would just have stopped being able to communicate with LWers, drifted into his own closed epistemic bubble, and to a substantial degree pulled CFAR with him. I feel similarly about many attendees at CFAR events, although fewer since Val left. I never talked to Pete Michaud very much, and while I think he seemed quite emotionally mature (I mean that sincerely), he seemed primarily interested in things to do with authentic relating and circling, and again I didn’t get many signs that he understood why building explicit models or a communal record of insights and ideas was important. Because of this, it was really weird to me that he was executive director for a few years.
To put it another way, I feel like CFAR has in some ways given up on the goals of science, and moved toward the goals of a private business, whereby you do some really valuable things yourself when you’re around, and create a lot of value, but all the knowledge you gain about building a company, about your market, about markets in general, and more, isn’t very communicable, and isn’t passed on in the public record for other people to build on. (E.g. see the difference between how scientists are all in a race to be first to add their ideas to the public domain, whereas Apple primarily makes everyone sign NDAs and lets out no information other than releasing their actual products; I expect Apple will take most of their insights to the grave.)
Just zooming in on this, which stood out to me personally as a particular thing I’m really tired of.
If you’re not disagreeing with people about important things then you’re not thinking. There are many options for how to negotiate a significant disagreement with a colleague, including spending lots of time arguing about it, finding a compromise action, or stopping collaborating with the person (if it’s a severe disagreement, which often it can be). But telling someone that by disagreeing they’re claiming to be ‘better’ than another person in some way always feels to me like an attempt to ‘control’ the speech and behavior of the person you’re talking to, and I’m against it.
It happens a lot. I recently overheard someone (whom I’d not met before) telling Eliezer Yudkowsky that he’s not allowed to have extreme beliefs about AGI outcomes. I don’t recall the specific claim, just that EY’s probability for it was in the 95-99% range. The person argued that because EY disagrees with some other thoughtful people on that question, he shouldn’t have such confidence.
(At the time, I noticed I didn’t have to be around or listen to that person and just wandered away. Poor Eliezer stayed and tried to give a thoughtful explanation for why the argument seemed bad.)
I noticed this too. I thought a bunch of people were affected by it in a sort of herd-behavior way (not focused so much on MIRI/CFAR; I’m talking more broadly about the rationality/EA communities). I do think key parts of the arguments about how to think about timelines and takeoff are accurate (e.g. 1, 2), but I feel like many people weren’t making decisions because of reasons; instead they noticed their ‘leaders’ were acting scared and then they also acted scared, like a herd.
In both the Leverage situation and the AI timelines situation, I felt like nobody involved was really appreciating how much fuckery the information siloing was going to cause (and did cause) to the way the individuals in the ecosystem made decisions.
This was one of the main motivations behind my choice of example in the opening section of my 3.5-year-old post A Sketch of Good Communication, btw (a small thing, but still meant to openly disagree with the seeming consensus that timelines determined everything). And then I wrote about the social dynamics a bunch more 2 years ago, when trying to expand on someone else’s question on the topic.