Individual vs. Group Epistemic Rationality

It’s common practice in this community to differentiate forms of rationality along the axes of epistemic vs. instrumental, and individual vs. group, giving rise to four possible combinations. I think our shared goal, as indicated by the motto “rationalists win”, is ultimately to improve group instrumental rationality. Generally, improving each of these forms of rationality also tends to improve the others, but sometimes conflicts arise between them. In this post I point out one such conflict between individual epistemic rationality and group epistemic rationality.

We place a lot of emphasis here on calibrating individual levels of confidence (i.e., subjective probabilities), and on the idea that rational individuals will tend to converge toward agreement about the proper level of confidence in any particular idea as they update on the available evidence. But I argue that from a group perspective, it’s sometimes better to have a spread of individual confidence levels around the individually rational level. Perhaps paradoxically, disagreements among individuals can be good for the group.

A background fact that I start with is that almost every scientific idea humanity has ever come up with has been wrong. Some are obviously crazy and quickly discarded (e.g., every perpetual motion proposal), while others improve upon existing knowledge but are still subtly flawed (e.g., Newton’s theory of gravity). If we accept that taking multiple approaches simultaneously is useful for solving hard problems, then upon the introduction of any new idea that is not obviously crazy, effort should be divided between extending the usefulness of the idea by working out its applications, and finding and fixing flaws in the underlying math, logic, and evidence.

Having a spread of confidence levels in the new idea helps increase individual motivation to perform these tasks. If you’re overconfident in an idea, you will tend to be more interested in working out its applications; conversely, if you’re underconfident in it (i.e., excessively skeptical), you will tend to work harder to find its flaws. Since scientific knowledge is a public good, individually rational levels of motivation to produce it are almost certainly too low from a social perspective, so these individually irrational boosts in motivation tend to increase group rationality.

Even amongst altruists (at least human ones), excessive skepticism can be a virtue, due to the phenomenon of belief bias, in which “someone’s evaluation of the logical strength of an argument is biased by their belief in the truth or falsity of the conclusion”. In other words, given equal levels of motivation, you’re still more likely to spot a flaw in the arguments supporting an idea if you don’t believe in it. Consider a hypothetical idea to which a rational individual, after taking into account all available evidence and arguments, would assign a probability of .999 of being true. If it’s a particularly important idea, then on a group level it might still be worth devoting the time and effort of a number of individuals to try to detect any hidden flaws that may remain. But if all those individuals believe that the idea is almost certainly true, then their performance at this task would likely suffer compared to those who are (irrationally) more skeptical.
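To make the group-level intuition concrete, here is a minimal toy calculation; the specific detection probabilities (0.1 for a believing reviewer, 0.3 for a skeptical one) are purely illustrative assumptions of mine, not numbers from the argument above:

```python
# Toy model (illustrative numbers only): if a hidden flaw exists, what is the
# probability that at least one of n reviewers finds it, given each reviewer
# independently detects it with probability p?
def p_flaw_found(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Assume belief bias cuts a believing reviewer's detection probability to 0.1,
# while a skeptical reviewer detects the flaw with probability 0.3.
for label, p in [("believers", 0.1), ("skeptics", 0.3)]:
    print(label, round(p_flaw_found(p, n=5), 3))
# believers 0.41 vs. skeptics 0.832 -- the skeptical group is far more likely
# to catch the flaw, even though each individual skeptic is "irrationally"
# underconfident in the idea.
```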

Note that I’m not arguing that our current “natural” spread of confidence levels is optimal in any sense. It may well be that the current spread is too wide even on a group level, and that we should work to reduce it, but I think it can’t be right for us to aim right away for an endpoint where everyone literally agrees on everything.