So if we think about the epistemological issue space in terms of a Venn diagram we can imagine the following circles all of which intersect:
1. Ubiquitous (Outside: non-ubiquitous). Subject areas where prejudgement is ubiquitous are problematic because finding a qualified neutral arbitrator is difficult: nearly everyone is invested in the outcome.
2. Contested (Outside: uncontested). Either there is no consensus among authorities, the legitimacy of the authorities is in question, or there are no relevant authorities. Obviously, not being able to appeal to authorities makes rational belief more difficult.
3. Invested (Outside: non-invested). People have incentives, apart from the evidence, for believing some things rather than others. When people are invested in their beliefs, motivated skepticism is a common result.
3a. Entangled (Outside: untangled). In some cases people can easily be separated from the incentives that lead them to be invested in some belief (for example, when the incentives are financial). But sometimes the incentives are so entangled with the agents and the proposition that there is no easy procedure that lets us remove them.
3ai. Progressive (Outside: traditional). Cases of entangled invested beliefs can roughly and vaguely be divided into those aligned with progress and those aligned with tradition.
So we have a diagram of three concentric circles (invested, entangled, progressive) bisected by a two-circle diagram (ubiquitous, contested).
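To make the geometry of that diagram concrete, here is a minimal sketch in Python; the class and field names are my own illustrative choices rather than anything fixed by the discussion. The two checks encode the concentric nesting (progressive inside entangled inside invested), while ubiquitous and contested are left free to cut across the other three.

```python
from dataclasses import dataclass


@dataclass
class Issue:
    """Illustrative classification of an epistemic issue (names are mine)."""
    ubiquitous: bool    # prejudgement on the issue is nearly universal
    contested: bool     # no usable consensus among legitimate authorities
    invested: bool      # people have non-evidential incentives to believe
    entangled: bool     # those incentives can't easily be separated from the agent
    progressive: bool   # the entangled belief is aligned with progress
                        # (False covers traditional or unaligned cases)

    def __post_init__(self) -> None:
        # Enforce the concentric structure: progressive ⊆ entangled ⊆ invested.
        # Ubiquitous and contested are unconstrained with respect to these.
        if self.progressive and not self.entangled:
            raise ValueError("progressive issues must also be entangled")
        if self.entangled and not self.invested:
            raise ValueError("entangled issues must also be invested")
```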
Now it seems clear that membership in every one of these sets makes an issue harder to think about rationally, with one exception. How do beliefs aligned with progress differ structurally from beliefs aligned with tradition? What do we need to do differently for one over the other? If there is no difference, we might as well address both at the same time.
That’s an excellent way of putting it, which brings a lot of clarity to my clumsy exposition! To answer your question, yes, the same essential mechanism I discussed is at work in both progressive and traditional biases—the desire that facts should provide convenient support for normative beliefs causes bias in factual beliefs, regardless of whether these normative beliefs are cherished as achievements of progress or revered as sacred tradition. However, I think there are important practical differences that merit some separate consideration.
The problem is that traditionalist vs. progressive biases don’t appear randomly. They are correlated with many other relevant human characteristics. In particular, my hypothesis is that people with formidable rational thinking skills—who, compared to other people, have much less difficulty with overcoming their biases once they’re pointed out and critically dissecting all sorts of unpleasant questions—tend to have a very good detector for biases and false beliefs of the traditionalist sort, but they find it harder to recognize and focus on those of the progressive sort.
What this means is that in practice, when exceptionally rational people see some group feeling good about their beliefs because these beliefs are a revered tradition, they’ll immediately smell likely biases and turn their critical eye on it. On the other hand, when they see people feeling good about their beliefs because they are a result of progress over past superstition and barbarism, they are in danger of assuming without justification that the necessary critical work has already been done, so everything is OK as it is. Also, in the latter sort of situation, they will relatively easily assume that the only existing controversy is between the rational progressive view and the remnants of the past superstition, although reality could be much more complex. This could even conceivably translate into support for the mainstream progressive view even if it has strayed into all sorts of biases and falsities.
So, basically, when we consider what biases and false beliefs could be hiding in things that are presently a matter of consensus, things that it just doesn’t even occur to anyone reputable to question, it seems to me that there is a greater chance of finding those that are hiding in your (3ai) category than in the rest of (3a). Thus, I would propose a heuristic that, I believe, has the potential to detect a lot of biases we are unaware of: just as you get suspicious as soon as you see people happy and content with their traditional beliefs, you should also get suspicious whenever you see a consensus that progress has been achieved on some issue, both normatively and factually, but where the factual part is not supported by strict hard-scientific evidence and there is a high degree of normative/factual entanglement.
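If it helps, the heuristic can be restated as a simple predicate. This is only a sketch, and the input names are my own illustrative labels for the three conditions just described.

```python
def warrants_extra_scrutiny(consensus_of_progress: bool,
                            hard_scientific_support: bool,
                            normative_factual_entanglement: bool) -> bool:
    """Rough restatement of the proposed heuristic (inputs are illustrative).

    Flag an issue for the same scrutiny we already give to contented
    traditional beliefs when there is a consensus that progress has been
    achieved on it, the factual component lacks hard-scientific support,
    and the normative and factual claims are highly entangled.
    """
    return (consensus_of_progress
            and not hard_scientific_support
            and normative_factual_entanglement)
```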
This sounds like an interesting idea to me, and I hope it winds up in whatever fuller exposition of your ideas you end up posting.