Practical debiasing

Some of this post is an expansion of topics covered by Lukeprog here.

1. Knowing about biases (doesn’t stop you being biased)

Imagine you had to teach a course that would help people to become less biased. What would you teach? A natural idea, tempting enough in theory, might be that you should teach the students about all of the biases that influence their decision making. Once someone knows that they suffer from overconfidence in their ability to predict future events, surely they will adjust their confidence accordingly.

Readers of Less Wrong will be aware that it’s more complicated than that.

There is a mass of research showing that knowing about cognitive biases does not stop someone from being biased. Quattrone et al. (1981) showed that anchoring effects are not decreased by instructing subjects to avoid the bias. Similarly, Pohl and Hell (1996) demonstrated that the same applies to the hindsight bias. Finally, Arzy et al. (2009) showed that including a misleading detail in a description of a medical case significantly decreased diagnostic accuracy, and accuracy did not improve when doctors were warned that such information might be present.

2. Consider the opposite (but not too much)

So what does lead to debiasing? As Lukeprog mentioned, one well-supported tactic is “consider the opposite”, which involves simply considering some reasons that an initial judgment might be incorrect. This has been shown to help counter overconfidence and hindsight bias as well as anchoring. See, for example, Arkes (1991) or Mussweiler et al. (2000) for studies along these lines.

There are two more things worth noting about this tactic. The first is that Soll and Klayman (2004) demonstrated that a related tactic has positive results in relation to overconfidence. In their experiment, Soll and Klayman asked subjects to give an interval such that they were 80% sure that the answer to a question lay within it. So they asked for predictions of things like the birth year of Oliver Cromwell, and the subjects would need to provide an early year and a late year such that they were 80% sure that Cromwell was born somewhere between these two years. These subjects exhibited substantial overconfidence: they were right far less than 80% of the time.

However, another group of subjects was asked two questions instead. For the first, they were asked to pick a year such that they were 90% sure Cromwell wasn’t born before this year. For the second, they were asked to pick a year such that they were 90% sure that Cromwell wasn’t born after this year. Subjects still displayed overconfidence in response to these questions, but to a far smaller extent. Yet the two tasks are equivalent (eta: though see this comment)! Being forced to consider arguments for both ends of the interval seemed to lead to more accurate predictions. Further studies have attempted to improve on this result through more sophisticated tactics along the same lines (see, for example, Speirs-Bridge et al., 2009).
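One way to see why the two framings target the same quantity: if you are 90% sure the true value is not below your lower bound and 90% sure it is not above your upper bound, then at most 10% + 10% = 20% of your probability lies outside the interval, so at least 80% lies inside it. If you want to check your own calibration on questions like these, you can record your intervals and score them afterwards; the sketch below (with made-up questions and numbers) is just one way to do that.

```python
# Minimal calibration check for 80% interval estimates.
# The questions and numbers below are made-up examples; substitute your own records.
estimates = [
    # (lower bound, upper bound, true answer)
    (1550, 1650, 1599),   # Oliver Cromwell's birth year (actually 1599)
    (5000, 9000, 6371),   # Earth's mean radius in km (about 6371)
    (20, 50, 37),         # plays usually attributed to Shakespeare (about 37)
]

hits = sum(low <= truth <= high for low, high, truth in estimates)
hit_rate = hits / len(estimates)

print(f"Hit rate: {hit_rate:.0%} (target for 80% intervals: 80%)")
# Over many questions, a hit rate well below 80% suggests your intervals
# are too narrow, i.e. you are overconfident.
```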

The second thing worth noting is that considering too many reasons that an initial judgement might be incorrect is counterproductive (see Roese, 2004 or Sanna et al., 2002). After a certain point, it becomes increasingly difficult to generate further reasons why one might have been incorrect. This difficulty then serves to convince the person that their idea must be right; otherwise it would be easier to come up with reasons against it. At this point, the technique ceases to have a debiasing effect. While the exact number of reasons that one should consider is likely to differ from case to case, Sanna et al. (2002) found a debiasing effect when subjects were asked to consider 2 reasons against their initial conclusion but not when they were asked to consider 10. Consequently, it seems plausible that the ideal number of arguments to consider will be closer to 2 than 10.

So consider the opposite, but not too much.


3. Provide reasons

There is also evidence that providing reasons for your decision or judgement can help to mitigate biases. Arkes et al. (1988) demonstrated that, in relation to hindsight bias, asking for a rationale for a judgement can help debias that judgement.

Similar results have been demonstrated in relation to framing effects. Miller and Fagley (1991) presented participants with a series of scenarios about how to respond to a disease outbreak. One group was presented with a positive frame while the other was presented with a negative frame. This framing influenced the program of response that the participants selected. In other words, those in the negative frame group selected responses at different frequencies from those in the positive frame group, despite the scenario being the same. However, if the groups were asked to provide a reason for their decision, then both groups selected responses at about the same frequency (though Sieck and Yates (1997) demonstrated that this approach does not work for all types of framing questions).

So provide reasons for your decisions.

4. Get some training

There is also evidence that some biases can be trained away. Specifically, Larrick et al. (1990) showed that the sunk cost fallacy can be avoided through training, and Fong et al. (1986) presented similar results for judgements about sample variability.

Larrick (2004) claims that this training is most effective when an abstract principle is taught alongside concrete examples. He also suggests that the training should include examples showing how the principle works in context. The process of training involves not just learning the rule but also figuring out when to apply it and then (hopefully) coming to apply it automatically.

This seems like the sort of thing that could potentially be run in the discussion section of Less Wrong or at face-to-face meetups.

5. Reference class forecasting

The final technique I want to discuss is reference class forecasting, which has been discussed by both Robin and Eliezer. On Less Wrong, this topic is often discussed in terms of the inside and the outside view. Reference class forecasting is basically the idea that, in predicting how long a project will take, one should not try to figure out how long each component of the project will take but should instead ask how long it has taken you (or others) to complete similar tasks in the past.
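As a concrete illustration of the outside view, a reference class forecast can be as simple as collecting the completion times of similar past projects and reading off a suitable percentile, instead of summing up optimistic estimates for each component. The sketch below uses invented numbers purely to show the mechanics.

```python
# Toy reference class forecast: estimate a new project's duration from the
# observed durations of similar past projects (the outside view), rather
# than from a component-by-component breakdown (the inside view).
# The durations below are invented for illustration.
past_durations_days = [12, 18, 9, 25, 14, 30, 16, 22]

def reference_class_forecast(durations, percentile=0.8):
    """Return a duration that roughly `percentile` of past projects finished within."""
    ordered = sorted(durations)
    index = min(int(percentile * len(ordered)), len(ordered) - 1)
    return ordered[index]

print("Typical (median) past duration:", reference_class_forecast(past_durations_days, 0.5))
print("Conservative (80th percentile) estimate:", reference_class_forecast(past_durations_days, 0.8))
```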

This approach has been shown to be effective in overcoming the planning fallacy. For example, Osberg and Shrauger (1986) demonstrated that those instructed to consider their performance in similar cases in the past were better able to predict their performance in new projects.

So in predicting how long a task will take, use the outside view, not the inside view.

6. Concluding remarks

I’m sure there’s nothing here that will surprise most Less Wrong readers but I hope that having it all together in one place is useful. For anyone who’s interested, I got a lot of the information for this post from Richard P. Larrick’s article, ‘Debiasing’ in the Blackwell Handbook of Judgment and Decision Making which is a good book all round.

References

Arkes, H.R. 1991, ‘Costs and benefits of judgement errors: Implications for debiasing’, Psychological Bulletin, vol. 110, no. 3, pp. 486-498

Arkes, H.R., Faust, D., Guilmette, T.J., & Hart, K. 1988, ‘Eliminating the Hindsight Bias’, Journal of Applied Psychology, vol. 73, pp. 305-307

Fong, G.T., Krantz, D.H. & Nisbett, R.E. 1986, ‘The effects of statistical training on thinking about everyday problems’, Cognitive Psychology, vol. 18, pp. 253-292.

Larrick, R.P. 2004, ‘Debiasing’, in Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Oxford, pp. 316-337.

Miller, P.M. & Fagley, N.S. 1991, ‘The Effects of Framing, Problem Variations, and Providing Rationale on Choice’, Personality and Social Psychology Bulletin, vol. 17, no. 5, pp. 517-522.

Mussweiler, T., Strack, F. & Pfeiffer, T. 2000, ‘Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility’, Personality and Social Psychology Bulletin, vol. 26, no. 9, pp. 1142-1150.

Osberg, T.M. & Shrauger, J.S. 1986, ‘Self-prediction: Exploring the parameters of accuracy’, Journal of Personality and Social Psychology, vol. 51, no. 5, pp. 1044-1057.

Pohl, R.F. & Hell, W. 1996, ‘No Reduction in Hindsight Bias after Complete Information and Repeated Testing’, Organizational Behavior and Human Decision Processes, vol. 67, no. 1, pp. 49-58.

Quattrone, G.A., Lawrence, C.P., Finkel, S.E. & Andrus, D.C. 1981, ‘Explorations in anchoring: The effects of prior range, anchor extremity, and suggestive hints’, unpublished manuscript, Stanford University.

Roese, N.J. 2004, ‘Twisted Pair: Counterfactual Thinking and the Hindsight Bias’, in Blackwell Handbook of Judgment and Decision Making, Blackwell Publishing, Oxford, pp. 258-273.

Sanna, L.J., Schwarz, N. & Stocker, S.L. 2002, ‘When Debiasing Backfires: Accessible Content and Accessibility Experiences in Debiasing Hindsight’, Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 28, no. 3, pp. 497-502.

Soll, J.B. & Klayman, J. 2004, ‘Overconfidence in Interval Estimates’, Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 30, no. 2, pp. 299-314

Speirs-Bridge, A., Fidler, F., McBride, M., Flander, L., Cumming, G. & Burgman, M. 2009, ‘Reducing overconfidence in the interval judgements of experts’, Risk Analysis, vol. 30, no. 3, pp. 512-523.