Ben Pace writes:

In recent years, when I’ve been at CFAR events, I’ve generally felt that at least 25% of attendees probably haven’t read The Sequences, aren’t part of this shared epistemic framework, don’t have any understanding of that law-based approach, and don’t have a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on.
The “many alumni haven’t read the Sequences” part has actually been here since very near the beginning (not the initial 2012 minicamps, but the very first paid workshops of 2013 and later). (CFAR began in Jan 2012.) You can see it in our old end-of-2013 fundraiser post, where we wrote: “Initial workshops worked only for those who had already read the LW Sequences. Today, workshop participants who are smart and analytical, but with no prior exposure to rationality—such as a local politician, a police officer, a Spanish teacher, and others—are by and large quite happy with the workshop and feel it is valuable.” We didn’t name this explicitly in that post, but part of the hope was to get the workshops to work for a slightly larger, broader, more cognitively diverse set than the set for whom the original Sequences in their written form tended to spontaneously “click”.
As to the “aren’t part of this shared epistemic framework” part—when I go to e.g. the alumni reunion, I do feel there are at least basic pieces of this framework that I can rely on. For example, even on contentious issues, 95%+ of alumni reunion participants seem to me to be pretty good at remembering that arguments should not be like soldiers, that beliefs are for true things, etc. -- to my eyes there is a very noticeable positive difference between the folks at the alumni reunion and, say, smart STEM graduate students unselected for rationality (though STEM graduate students are also notably more skilled than the general population at this, and though both groups fall short of perfection).
Still, I agree that it would be worthwhile to build more common knowledge, and [whatever the “values” analog of common knowledge is called], supporting “a felt need to cache out their models of the world into explicit reasoning and communicable models that others can build on”: models that are piecewise-checkable (rather than opaque masses of skills that are useful as a mass but hard to build across people and time). This particular piece of culture is harder to teach to folks who are seeking individual utility, because the most obvious payoffs are at the level of the group and of the long-term process, rather than at the level of the individual (where the payoffs to e.g. goal-factoring and murphyjitsu are located). It also pays off more in later-stage fields and less in the earliest stages of science within pre-paradigm fields such as AI safety, where it’s often about shower thoughts and slowly following inarticulate hunches. But still.