You can taboo a word, or even a concept, but you can't taboo a meaningful regularity or pretend that it isn't there.
I’m not proposing we pretend there’s no regularity to “types of thinking that help us form accurate beliefs, across domains”. Not at all. I’m proposing we stay attentive to the evidence as to what those types of thinking actually are and aren’t, by spelling out our full goal as much as possible. If we use the term “rationality” as a shorthand instead of spelling out that we’re after “types of thinking that actually help us form accurate beliefs”, it’s easy for the term “rationality” to become un-glued from the goal. So that “rationality” gets glued to “that thing we do in the rationality dojo” or to “whatever the Great Teacher said” or to “anything that sets me apart from others and lets me feel superior, like using long sentences and being socially awkward”, instead of being a term for those meaningful regularities we’re actually trying to study (the meaningful regularities in thinking methods that actually work).
The problem with belief-in-belief-in-rationality is the same as with other lost purposes: it is one of the essential lessons to learn, not something to shoo away.
Well, yes, I agree that a rationality dojo should talk about lost purposes, about the trouble with belief in belief in general, and about what exactly goes wrong when people speak overmuch of “rationality” instead of keeping their eyes on the prize. Is this supposed to be in tension with the suggestion that we, as a community, build a strong norm against talking overmuch of “rationality” and for, instead, speaking of “kinds of thinking that help us form accurate beliefs / achieve our goals”? I’m imagining that it’s precisely by having a really clear view of the standard “lost purposes” failure modes, and of their application to “rationality” learning, that we can maintain such a norm.
But for some reason we are talking about a specific failure mode, one that is not necessarily the single best case for demonstrating the general principles, and one that by itself is clearly insufficient. Investing disproportionately in this single case must serve additional purposes.
I can see two goals:
Safeguarding the movement in its early stages, when it's easy to head off in the wrong direction.
Acting as a safety valve, compensating for the difficulty of certifying the sanity of the movement.