In my opinion, “a rationalism” (i.e., a set of memes designed for an intellectual community focused on the topic of clear thinking itself) requires a few components to work.
It requires a story in which more is possible (as compared with “how you would reason otherwise” or “how other people reason” or the like).
The first component is an overarching theory of reasoning. This is a framework in which we can understand what reason is, analyze good and bad reasoning, and offer advice about reasoning.
The second component is an account of how this is not the default. If good reasoning were simple and everyone were already quite good at it, there would be little motivation to learn about it, practice it, or form a community around it.
The sequences told a story in which the first role was mostly played by a form of Bayesianism, and the second was mostly played by the heuristics and biases literature. The LessWrong memeplex has evolved somewhat over time, including forming some distinct subdivisions with slightly different answers to those two questions.
Most notably, I think CFAR has changed its ideas about these two components quite a bit. One version I heard once: the sequences might give you the impression that people are overall pretty bad at Bayesian reasoning, and the best way to become more rational is to specifically de-bias yourself by training Bayesian reasoning and un-training all the known biases or coming up with ways to compensate for them. Initially, this was the vision of CFAR as well. But what CFAR found was that humans are actually really really good at Bayesian reasoning, when other psychological factors are not getting in the way. So CFAR pivoted to a model more focused on removing blockers rather than increasing basic reasoning skills.
Note that this is a different answer to the second question, but keeps Bayesianism as the overarching theory of rationality. (Also keep in mind that this is, quite probably, a pretty bad summary of how views have changed since the beginning of CFAR.)
Eliezer has now written Inadequate Equilibria, which offers a significantly different version of the second component. I can imagine someone starting there and coming away with an impression of what’s important about rationalism that is quite distant from Bayesianism: there, the primary story is that social blockers impede rationality, and the primary antidote is thinking for yourself rather than going with the crowd. Why is Bayesianism important for that? Well, the answer is that Bayesianism offers a nuts-and-bolts theory of how to think. You need some such theory in order to ground attempts at self-improvement (otherwise you risk making haphazard changes with no standard by which to judge whether you are thinking better or worse). And the quality of that theory has a significant bearing on how well the self-improvement turns out!
I think it’s important that the overarching theory of reasoning was some form of probabilism, for “obvious” reasons I won’t go into.
I think it was important that it was Bayesianism in particular for a few reasons, some better than others.
Bayesianism allows probabilism to be applied in the broadest way. Frequentist and propensity interpretations of probability both hold that it’s inappropriate to assign probabilities to hypotheses. This makes it much more difficult to apply lessons from probabilistic reasoning, since you’re restricted in where you can apply them. (Of course, if that restriction were appropriate, then it would be better to avoid applying the lessons of probability...)
Although vanilla Bayesianism is subjectivist about the prior, it offers a completely objective story about how reasoning should go once we’ve fixed the prior. I recently argued against this aspect of classical Bayesianism. However, I can see how this was an advantage in terms of memetics—a totally objective story for this part makes for strong dividing lines between correct and incorrect reasoning.
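As a toy illustration of that objectivity (my example, not from the post): once the prior and the likelihoods are fixed, Bayes’ rule leaves no further room for judgment; the posterior is mechanically determined. The hypotheses and numbers below are invented for the sketch.

```python
def posterior(prior: dict, likelihood: dict, evidence: str) -> dict:
    """Return P(hypothesis | evidence) from a prior over hypotheses
    and likelihoods P(evidence | hypothesis), via Bayes' rule."""
    unnormalized = {h: prior[h] * likelihood[h][evidence] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Two made-up hypotheses about a coin: fair vs. biased toward heads.
prior = {"fair": 0.5, "biased": 0.5}
likelihood = {
    "fair":   {"heads": 0.5, "tails": 0.5},
    "biased": {"heads": 0.9, "tails": 0.1},
}

p = posterior(prior, likelihood, "heads")
# Observing heads shifts credence toward the biased hypothesis.
```

Two Bayesians who agree on the prior and the likelihoods must agree on the posterior; all the subjectivity is quarantined in the inputs.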
The addition of algorithmic information theory also offers a “more objective” story about the prior.
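A minimal sketch of that idea, with a big caveat: real algorithmic information theory weights programs for a universal machine by 2^(-length), and the resulting Solomonoff prior is uncomputable. Plain English description lengths, as below, are only a stand-in for the shape of the prior.

```python
def simplicity_prior(descriptions):
    """Toy 'simplicity prior': weight each hypothesis by
    2^(-description length), then normalize, so shorter
    descriptions get more prior mass. Illustrative only --
    string length is standing in for program length."""
    weights = {d: 2.0 ** -len(d) for d in descriptions}
    total = sum(weights.values())
    return {d: w / total for d, w in weights.items()}

# Hypothetical competing hypotheses about a coin.
hypotheses = [
    "the coin is fair",
    "the coin follows a 37-state hidden Markov model tuned to past flips",
]
prior = simplicity_prior(hypotheses)
# The shorter (simpler) description dominates the prior.
```

The point is that the prior is now fixed by a rule rather than chosen freely, which is what makes the story feel “more objective.”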
As I have recently argued, classical Bayesianism ends up sidelining some important “frequentist” properties, which we should also want. So, to an extent, my current perspective is a hybrid of Bayesianism and frequentism. But given a choice between the two, it seems much better that I started out Bayesian and had to figure out how to integrate frequentist ideas, rather than the other way around.
But what CFAR found was that humans are actually really really good at Bayesian reasoning, when other psychological factors are not getting in the way.
What’s the source for that claim? Is that a public position of CFAR?
No. It is something somebody said to me once. (Possibly in a context where I’m supposed to anonymize the source—I don’t remember for sure.) I’m sure there are a lot of other complexities to CFAR’s history, and a lot of different summaries one could give. And maybe this particular summary is actually really bad for some reason I’m not aware of.