Harnessing Your Biases

Theoretically, my ‘truth’ function, the amount of evidence I need to cache something as ‘probably true and reliable’, should be a constant. I find, however, that it isn’t. I read a large amount of scientific literature every day and only have time to investigate a scant fraction of it in practice. So I typically rely upon science reporting that I’ve found to be accurate in the past, and only investigate the few things that have direct relevance to work I am doing (or may end up doing).

Today I noticed something about my habits. I saw an article on how string theory was making testable predictions in the realm of condensed matter physics, specifically about room-temperature superconductors. While this is a pet interest of mine, it is not an area I’m ever likely to be working in, but the article seemed sound, so I decided it was an interesting fact and moved on, not even realizing that I had cached it as probably true.

A few minutes later it occurred to me that some of my friends might also be interested in the article. I have a Google RSS feed that I use to republish occasional articles that I think are worth reading. I have a known readership of all of 2. Suddenly, I discovered that what I had been willing to accept as ‘probably true’ on my own behalf was no longer good enough. Now I wanted to look at the original paper itself, and to see if I could find any learnéd refutations or comments.

This seems to be because my reputation was now, however tangentially, “on the line”: I have a reputation in my circle of friends as the science geek and would not want to damage it by steering someone wrong. Now, clearly this is wrong-headed. My theory of truth should be my theory of truth, period.

One could argue, I suppose, that information I store internally can only affect my own behavior, while information I disseminate can affect the behavior of an arbitrarily large group of people, and so a more stringent standard should apply to things I tell others. In fact, that was the first justification that sprang to mind when I noticed my double standard.

It’s a bogus argument, though, as none of my friends are likely to repeat the article or post it on their blogs, so the dissemination has only a tiny probability of propagating by that route. However, once it’s in my head and I’m treating it as true, I’m very likely to trot it out as an interesting fact when I’m talking at science fiction conventions or to groups of interested geeks. If anything, the standard for my believing something should be more stringent than my standard for repeating it, not the other way around.

But the title of this post is “Harnessing Your Biases”, and it seems to me that if I have this strange predisposition to check more carefully when I am going to publish something, then maybe I need to set up a blog of things I have read that I think are true. It can just be an edited feed of my RSS stream, since that is simple to put together. Then I may find myself being more careful in what I accept as true. The mere fact that I have the feed and that it’s public (although I doubt that anyone would, in fact, read it) would make me more careful. It’s even possible that it will contain very few articles, as I may find I don’t have time to investigate interesting claims well enough to declare them true, but this will have the positive side effect that I won’t go around caching them internally as true either.
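For concreteness, here is a minimal sketch of what “an edited feed of my RSS stream” could amount to, written in Python and assuming the feedparser library; the feed URL, filenames, and the vetted-ID scheme are placeholders for illustration, not a description of any real setup.

    # Sketch: pull my full RSS stream, keep only the entries I have explicitly
    # vetted, and write them back out as a small RSS file that could be served
    # publicly. URL and filenames below are hypothetical placeholders.

    import feedparser  # widely used RSS/Atom parser: pip install feedparser
    from xml.sax.saxutils import escape

    SOURCE_FEED = "https://example.com/my-full-stream.rss"  # hypothetical
    VETTED_IDS_FILE = "vetted_ids.txt"  # one entry id or link per line, added only after checking the claim
    OUTPUT_FILE = "edited_feed.xml"

    def load_vetted_ids(path):
        """IDs of entries I have investigated well enough to endorse publicly."""
        try:
            with open(path) as f:
                return {line.strip() for line in f if line.strip()}
        except FileNotFoundError:
            return set()

    def build_item(entry):
        """Render one vetted entry as an RSS <item>."""
        return (
            "    <item>\n"
            f"      <title>{escape(entry.get('title', ''))}</title>\n"
            f"      <link>{escape(entry.get('link', ''))}</link>\n"
            f"      <guid>{escape(entry.get('id', entry.get('link', '')))}</guid>\n"
            "    </item>\n"
        )

    def main():
        vetted = load_vetted_ids(VETTED_IDS_FILE)
        parsed = feedparser.parse(SOURCE_FEED)

        # Keep only entries whose id or link appears in the vetted list.
        items = [
            build_item(e)
            for e in parsed.entries
            if e.get("id") in vetted or e.get("link") in vetted
        ]

        rss = (
            '<?xml version="1.0" encoding="UTF-8"?>\n'
            '<rss version="2.0">\n'
            "  <channel>\n"
            "    <title>Things I have checked and believe</title>\n"
            "    <link>https://example.com/edited-feed</link>\n"
            "    <description>Only claims I was willing to stake my reputation on.</description>\n"
            + "".join(items) +
            "  </channel>\n"
            "</rss>\n"
        )

        with open(OUTPUT_FILE, "w", encoding="utf-8") as f:
            f.write(rss)

    if __name__ == "__main__":
        main()

The point of the sketch is only that the machinery is trivial; the work, and the useful pressure, is in deciding which IDs ever make it into the vetted list.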

I think that, in many ways, this is why, in the software field, code reviews are universally touted as an extraordinarily cheap and efficient way of improving code design and documentation while decreasing bugs, and yet are very hard to get put into practice. The idea is that after you’ve written any piece of code, you give it to a coworker to critique before you put it in the code base. If they find too many things to complain about, it goes back for revision before being given to yet another coworker to check. This continues until it’s deemed acceptable.

In practice, the quality of work goes way up and the speed of raw production goes down marginally. The end result is code that needs far less debugging, so the number of working lines of code produced per day goes way up. I think this is because programmers in such a regime quickly find that the testing and documenting they think is ‘good enough’ when their work is not going to be immediately reviewed is far less than the testing and documenting they do when they know they have to hand their work to a coworker to criticize. The downside, of course, is that they are now opening themselves up to criticism on a daily basis, which is something few folks enjoy no matter how good it is for them, and so the practice continues to be quite rare due to programmer resistance to the idea.

These appear to be two different ways of harnessing the same bias, the tendency to do better (or more careful) work when it is going to be examined, to achieve better results. Can anyone else here think of other biases that can be exploited in useful ways to leverage greater productivity or reliability in projects?