Evolved Bayesians will be biased

I have a small theory which strongly implies that getting less biased is likely to make “winning” more difficult.

Imagine some sort of evolving agents that follow vaguely Bayesianish logic. They don’t have infinite resources, so they rely on a lot of heuristics rather than applying Bayes’ rule directly with priors based on Kolmogorov complexity. Still, they employ a procedure A to estimate what the world is like based on the available data, and a procedure D to make decisions based on those estimates, both of a vaguely Bayesian kind.

Let’s be kind to our agents and grant that for every piece of data and every decision problem they might have encountered in their ancestral environment, they make exactly the same decision as an ideal Bayesian agent would. A and D have been fine-tuned to work perfectly together.

That doesn’t mean that either A or D is perfect even within this limited domain. Evolution wouldn’t care about that at all. Perhaps different biases within A cancel each other. For example, an agent might overestimate how dangerous snakes are and also overestimate his snake-dodging skills, resulting in exactly the right amount of fear of snakes.

Or perhaps a bias in A cancels another bias in D. For example, an agent might overestimate his chances of success at influencing tribal policy, which neatly cancels his unreasonably high threshold for trying to do so.
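
To make the cancellation concrete, here is a minimal Python sketch. It is not from the original argument: the payoff of 5, cost of 1, the 50% overestimate, and the 3/10 threshold are all made-up numbers, chosen only so that the two biases offset exactly over an assumed ancestral range of cases.

```python
from fractions import Fraction

GAIN, COST = 5, 1                        # hypothetical payoff for success, cost of trying
IDEAL_THRESHOLD = Fraction(COST, GAIN)   # act iff p * GAIN > COST, i.e. p > 1/5

def ideal_estimate(p_true):
    """Procedure A of an ideal agent: report the true success probability."""
    return p_true

def ideal_decide(p_est):
    """Procedure D of an ideal agent: act iff expected value is positive."""
    return p_est > IDEAL_THRESHOLD

def evolved_estimate(p_true):
    """Evolved A: overestimates the chance of success by 50%."""
    return min(Fraction(1), Fraction(3, 2) * p_true)

def evolved_decide(p_est):
    """Evolved D: an unreasonably high threshold for acting."""
    return p_est > Fraction(3, 10)

# Over the ancestral range of situations (true p up to ~0.6), the two biases
# cancel exactly: (3/2) * p > 3/10 is the same condition as p > 1/5.
ancestral_cases = [Fraction(i, 100) for i in range(61)]
for p in ancestral_cases:
    assert ideal_decide(ideal_estimate(p)) == evolved_decide(evolved_estimate(p))
print("Evolved agent matches the ideal agent on every ancestral case.")
```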

And then our agents left their ancestral environment and found out that in some of the new situations their decisions aren’t that great. They thought about it a lot, noticed how biased they are, and started a website on which they teach each other how to make their A more like a perfect Bayesian’s A. They even got quite good at it.

Unfortunately, they have no way of changing their D. So the biases in their decisions that used to neatly counteract the biases in their estimation of the world now make them commit a lot of mistakes, even in situations where naive agents do perfectly well.

The problem is that for virtually every A and D pair that could possibly have evolved, no matter how good the pair is together, neither A nor D would be perfect in isolation. In all likelihood both A and D are ridiculously wrong, just in a special way that never hurts. Improving one without improving the other, or improving just part of either A or D, will lead to much worse decisions, even if your idea of what the world is like gets better.
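
Continuing the same toy sketch (again with hypothetical numbers): if the agent corrects its estimation procedure A but is stuck with the evolved decision procedure D, it now mishandles cases that the fully biased agent got right.

```python
from fractions import Fraction

def debiased_estimate(p_true):
    """A after the debiasing website: estimates are now accurate."""
    return p_true

def evolved_decide(p_est):
    """D is unchanged: still the unreasonably high evolved threshold."""
    return p_est > Fraction(3, 10)

def ideal_choice(p_true):
    """What a perfect Bayesian would do: act iff p > 1/5 (gain 5, cost 1)."""
    return p_true > Fraction(1, 5)

# The same ancestral cases the naive (fully biased) agent handled perfectly.
cases = [Fraction(i, 100) for i in range(61)]
mistakes = [p for p in cases
            if evolved_decide(debiased_estimate(p)) != ideal_choice(p)]
print(f"Half-debiased agent now gets {len(mistakes)} of {len(cases)} cases wrong.")
# Every case with 1/5 < p <= 3/10 is now mishandled: the agent's beliefs are
# better, but its decisions are worse than before it started debiasing.
```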

I think humans might be a lot like that. As an artifact of evolution, we make incorrect guesses about the world, and choices that would be incorrect even given those guesses, just in a way that worked really well in the ancestral environment and works well enough most of the time even now. Depressive realism is a special case of this effect, but the problem is much more general.