Debiasing as Non-Self-Destruction

Nick Bostrom asks:

“One sign that science is not all bogus is that it enables us to do things, like go to the moon. What practical things does debiasing enable us to do, other than refraining from buying lottery tickets?”

It seems to me that how to be smart varies widely between professions. A hedge-fund trader, a research biologist, and a corporate CEO must learn different skill sets in order to be actively excellent—an apprenticeship in one would not serve for the others.

Yet such concepts as “be willing to admit you lost”, or “policy debates should not appear one-sided”, or “plan to overcome your flaws instead of just confessing them”, seem like they could apply to many professions. And all this advice is not so much about how to be extraordinarily clever as about how to not be stupid. Each profession has its own way to be clever, but their ways of not being stupid have much more in common. And while victors may prefer to attribute victory to their own virtue, my small knowledge of history suggests that far more battles have been lost by stupidity than won by genius.

Debiasing is mostly not about how to be extraordinarily clever, but about how to not be stupid. Its great successes are disasters that do not materialize, defeats that never happen, mistakes that no one sees because they are not made. Often you can’t even be sure that something would have gone wrong if you had not tried to debias yourself. You don’t always see the bullet that doesn’t hit you.

The great victories of debiasing are exactly the lottery tickets we didn’t buy—the hopes and dreams we kept in the real world, instead of diverting them into infinitesimal probabilities. The triumphs of debiasing are cults not joined; optimistic assumptions rejected during planning; time not wasted on blind alleys. It is the art of non-self-destruction.
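To put a number on those infinitesimal probabilities, here is a minimal sketch of the expected-value arithmetic behind skipping the ticket. The odds, prize, and price below are made-up figures of roughly the right shape, not real lottery data.

```python
# A minimal sketch of the expected-value arithmetic behind skipping the
# lottery. All figures are hypothetical, chosen only for illustration.

def expected_value(p_win: float, jackpot: float, ticket_price: float) -> float:
    """Expected net gain (in dollars) from buying one ticket."""
    return p_win * jackpot - ticket_price

p_win = 1 / 300_000_000   # assumed jackpot odds
jackpot = 100_000_000     # assumed prize, ignoring taxes and split pots
ticket_price = 2.0        # assumed ticket price

print(expected_value(p_win, jackpot, ticket_price))  # ≈ -1.67
# On average, each ticket destroys most of its purchase price: the hope
# bought is worth about a third of a dollar in expectation.
```

Under these assumptions a ticket returns about thirty-three cents on two dollars; the rest is the price of the diverted dream.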

Admittedly, none of this is spectacular enough to make the evening news. It’s not a moon landing—though the moon landing did surely require thousands of things to not go wrong.

So how can we know that our debiasing efforts are genuinely useful? Well, this is the worst sort of anecdotal evidence—but people do sometimes ignore my advice, and then, sometimes, catastrophe ensues of just the sort I told them to expect. That is a very weak kind of confirmation, and I would like to see controlled studies… but most of the studies I’ve read consist of taking a few undergraduates who are in it for the course credit, merely telling them about the bias, and then waiting to see if they improve. What we need is longitudinal studies of life outcomes, and I can think of few people I would name as candidates for the experimental group.

The fact is, most people who take a halfhearted potshot at debiasing themselves do not get huge amounts of mileage out of it. This is one of those things you have to work at for quite a while before you get good at it, especially since there’s currently no source of systematic training, or even a decent manual. If for many years you practice the techniques and submit yourself to strict constraints, it may be that you will glimpse the center. But until then, mistakes avoided are often just replaced by other mistakes. It takes time for your mind to become significantly quieter. Indeed, a little knowledge of cognitive bias often does more harm than good.

As for public proof, I can see at least three ways that it could come about. First, there might be founded an Order of Bayescraft for people who are serious about it, and the graduates of its dojos might prove systematically more successful even after controlling for measures of fluid intelligence. Second, you could wait for some individual or group, working on an important domain-specific problem but also known for their commitment to debiasing, to produce a spectacularly huge public success. Third, there might be found techniques that can be taught easily and that have readily measurable results; and then simple controlled experiments could serve as public proof, at least for people who attend to Science.
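As a sketch of what that third route might look like in practice, here is a hypothetical analysis of one such controlled experiment: a trained group and a control group scored on some measurable bias. The group names, scores, and scoring method below are all invented for illustration, not actual results.

```python
# A minimal sketch of analyzing a simple controlled debiasing experiment.
# Scores might be, e.g., overconfidence on calibration questions (lower is
# better). All data here are invented placeholders, not real results.

from scipy import stats

control = [0.31, 0.28, 0.35, 0.30, 0.33, 0.29, 0.34, 0.32]  # untrained group
trained = [0.24, 0.22, 0.27, 0.21, 0.26, 0.25, 0.23, 0.28]  # after training

# Two-sample t-test: is the difference in mean scores plausibly real?
t_stat, p_value = stats.ttest_ind(trained, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value, given a sensible design (randomization, adequate sample
# size), is the kind of readily checkable public evidence described above.
```

The statistics are the easy part; the hard part, as noted above, is finding a technique teachable enough and an effect measurable enough for so simple a design to capture.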