It is true that there are reasons for our biases: human behavior was shaped by evolution and optimized for the natural environment. Many of the mistakes we make are the result of behavior that contributed to survival in nature.
But I think that “contributes to survival” does not always lead to “solid inference rules”. For example, imagine that the majority of a tribe is wrong about some factual question (one where being right or wrong is not immediately relevant for survival). It contributes to survival for an individual to join this majority, because it gets them allies. This could be excused by saying that in an ancient tribe without much specialization, the majority is more likely to be correct than a lone individual, therefore “follow the majority opinion” actually is a good truth-finding heuristic. But that ignores the fact that people sometimes lie for a purpose, e.g. to calumniate their opponents or to fabricate religious experiences. So there is more to joining the majority than merely a decent truth-finding heuristic.
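To make that concrete, here is a minimal Python sketch (my own illustration, with made-up parameters, not anything from a study): when tribe members err independently and each is a bit better than chance, the majority beats any single member, which is the Condorcet jury theorem; a few motivated liars quickly erode that advantage.

```python
import random

def majority_accuracy(n_members=25, p_correct=0.6, n_liars=0, trials=10_000):
    """Estimate how often the majority of a tribe answers a binary
    factual question correctly. Honest members are independently right
    with probability p_correct; liars deliberately assert the wrong answer."""
    majority_right = 0
    for _ in range(trials):
        # Liars contribute zero correct votes by construction.
        correct_votes = sum(random.random() < p_correct
                            for _ in range(n_members - n_liars))
        if correct_votes > n_members / 2:
            majority_right += 1
    return majority_right / trials

print(majority_accuracy())            # ~0.85: majority beats the 0.60 individual
print(majority_accuracy(n_liars=6))   # ~0.30: a handful of liars breaks it
```

So “follow the majority” works exactly under the independence assumption that deliberate liars violate.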
(EDIT: It’s not as if humans in the past lived in harmony with nature using their heuristics, and only today do we have exploitable biases. People had exploitable biases even in the ancestral environment (their heuristics were correct often, but not always), and people exploited each other’s biases even then. We had not only adaptations for making mostly correct decisions, but also adaptations for exploiting the flaws in other people’s adaptations.)
Also, no species is perfectly tuned to its environment. Some useful mutations simply haven’t happened yet. And there are various trade-offs, so even if a species as a whole is optimized for a given environment, some of its individual features may be suboptimal, the price of improving other, conflicting features. Therefore, assuming that every human bias is the result of behavior that was optimal in the natural environment would be assuming too much.
But otherwise, I like this.
I have to admit that the text is a bit long! We sorta did say all of what you are saying, which means that the way I summarized the text here was a bit misleading.
There must be conditions under which a heuristic like “follow the majority opinion” gets triggered in our heads: perhaps something is recognized. There is selection pressure to detect social-exchange violations, but also to be ingenious in persuasion. Some of this already has experimental support. Anyway, we think that what we today call fallacies are not accidents, like the blind spot. They are good inference rules for a relatively stable environment, but they cannot predict far into the future and cannot judge novel, complex problems. That may be why we don’t spot the fallacies in small talk, in the judgments of experts in domains with genuine expertise, or in domains for which we already have intuitions.
That would imply that a bad decision today is not necessarily the product of a cognitive illusion, but of the bad interface we have built for the actual human mind in the modern world (a car will be lighter and faster if it doesn’t have to accommodate humans). Reference class forecasting or presenting probabilities as frequencies are just technologies, interfaces. The science is about the function, and the fallacies are interesting precisely because, presumably, they are repetitive behavior. They may help in our effort to reverse-engineer ourselves.
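To illustrate the “interface” point with the frequency format: the numbers below are hypothetical, chosen only to show the contrast. The same Bayesian fact is opaque as conditional probabilities and nearly transparent as natural frequencies.

```python
# Hypothetical screening-test numbers; the point is the format, not the values.
prevalence = 0.01        # 1% of people have the condition
sensitivity = 0.90       # P(positive | condition)
false_positive = 0.09    # P(positive | no condition)

# "Probability" interface: Bayes' theorem.
p_condition_given_pos = (sensitivity * prevalence) / (
    sensitivity * prevalence + false_positive * (1 - prevalence))
print(f"P(condition | positive) = {p_condition_given_pos:.2f}")  # ~0.09

# "Natural frequency" interface: the same fact, restated for the mind
# we actually have.
population = 1000
sick = round(population * prevalence)                           # 10 people
sick_and_pos = round(sick * sensitivity)                        # 9 test positive
healthy_and_pos = round((population - sick) * false_positive)   # 89 test positive
print(f"{sick_and_pos} of {sick_and_pos + healthy_and_pos} "
      f"positives actually have the condition")                 # 9 of 98
```

Both outputs encode the identical function; only the second is an interface most minds can use, which is exactly the sense in which these tools are technologies rather than science.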