Adaptive bias

If we want to apply our brains more effectively to the pursuit of our chosen objectives, we must commit to the hard work of understanding how brains implement cognition. Is it enough to strive to “overcome bias”? I’ve come across an interesting tidbit of research (which I’ll introduce in a moment) on “perceptual pop-out” which hints that it is not.

“Cognition” is a broad notion; we can dissect it into awareness, perception, reasoning, judgment, feeling… Broad enough to encompass what I’m coming to call “pure reason”: our shared toolkit of normative frameworks for assessing probability, evaluating utility, guiding decisions, and so on. Pure reason is one component of rationality, as this term is used here, but it does not encompass all of rationality, and we should beware the many Myths of Pure Reason. The Spock caricature is one; by itself it is enough cause to use the word “rational” sparingly, if at all.

Or the idea that all bias is bad.

It turns out, for instance, that a familiar bugaboo, confirmation bias, might play an important role in perception. Matt Davis at Cambridge Medical School has crafted a really neat three-part audio sample showcasing one of his research topics. The first and last parts of the sample are exactly the same. If you are at all like me, however, you will perceive them quite differently.

Here is the audio sample (mp3). Please listen to it now.

Notice the difference? Matt Davis, who has researched these effects extensively, refers to them as “perceptual pop-out”. The link with confirmation bias is suggested by Jim Carnicelli: “Once you have an expectation of what to look for in the data, you quickly find it.”

In Probability Theory, E.T. Jaynes notes that perception is “inference from incomplete information”, and elsewhere adds:

Kahneman & Tversky claimed that we are not Bayesians, because in psychological tests people often commit violations of Bayesian principles. [...] People are reasoning to a more sophisticated version of Bayesian inference than [Kahneman and Tversky] had in mind. [...] We would expect Natural Selection to produce such a result: after all, any reasoning format whose results conflict with Bayesian inference will place a creature at a decided survival disadvantage.

There is an apparent paradox here: we are susceptible to various biases, yet those biases are prevalent precisely because they belong to a cognitive toolkit honed over a long evolutionary period, which suggests that each component of that toolkit must have worked, must have conferred some advantage. Bayesian inference, Jaynes claims, isn’t just a good move; it is the best move.
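To make the connection concrete, here is a minimal sketch of the idea: a toy Bayesian observer, written in Python with invented numbers (none of this comes from Davis’s experiments or Jaynes’s actual models). The point is only that the same weakly diagnostic evidence yields a very different posterior once the prior, the expectation of what to hear, is strong.

```python
# Toy model: perceiving a degraded audio clip as a Bayesian observer.
# The listener weighs two hypotheses about an ambiguous clip:
#   H1: the clip contains the sentence they were primed to expect
#   H0: the clip is just noise
# All numbers below are made up for illustration.

def posterior_sentence(prior_sentence, p_evidence_given_sentence, p_evidence_given_noise):
    """Posterior probability that the clip contains the sentence,
    by Bayes' rule: P(H1|E) = P(E|H1) P(H1) / P(E)."""
    prior_noise = 1.0 - prior_sentence
    p_evidence = (p_evidence_given_sentence * prior_sentence
                  + p_evidence_given_noise * prior_noise)
    return p_evidence_given_sentence * prior_sentence / p_evidence

# The degraded clip is only weakly diagnostic: it is twice as likely
# under "sentence" as under "noise" (likelihood ratio 2:1).
p_e_sentence, p_e_noise = 0.10, 0.05

# First listen: no particular expectation (prior 0.5).
print(posterior_sentence(0.5, p_e_sentence, p_e_noise))   # ~0.67

# Second listen: after hearing the clear version, the prior is ~0.95.
print(posterior_sentence(0.95, p_e_sentence, p_e_noise))  # ~0.97
```

On this reading, “pop-out” is roughly what a strong prior feels like from the inside: the expectation does most of the work, and the degraded signal only has to be compatible with it.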

However, these components evolved in specific situations; the hardware kit that they are part of was never intended to run the software we now know as “pure reason”. Our high-level reasoning processes are “hijacking” these components for other purposes. The same goes for our consciousness, which is also a patched-together hack on top of the same hardware.

That, by the way, is why Dennett’s work on consciousness is important, and should be given a sympathetic exposition here rather than a hatchet job. (This post is intended in part as a tentative prelude to tackling that exposition.)

We are not AIs, who, when finally implemented, will (putatively) be able to modify their own source code. The closest we can come to that is to be aware of what our reasoning is put together from, including various biases that exist for a reason, and to make conscious choices about how we use these components.

Bottom line: understanding where your biases come from, and putting that knowledge to good use, is of more value than rejecting all bias as evil.