Internal Availability

Edit: Following mixed reception, I decided to split this part out of the latest post in my sequence on reinforcement learning. It wasn’t clear enough, and anyway didn’t belong there.

I’m posting this hopefully better version to Discussion, and welcome further comments on content and style.


The availability heuristic seems to be a mechanism inside our brains which uses the ease with which images, events and concepts come to mind as evidence for their prevalence or probability of occurrence. For this heuristic to be worth the trouble it causes, there needs to be a counterpart, a second mechanism which actually makes things available to the first one in correlation with their likelihood. In this post I discuss why having such an internal availability mechanism can be a good idea, and outline some of the ways it can fail.


You’re playing Texas Hold’em poker against another player, and she has just bet all her chips on the flop (the 2nd of 4 betting rounds, when there are 2 more shared cards to draw). You estimate that with high probability she has a low pair (say, under 9) with a high kicker (A or K, hoping to hit a second pair). You hold Q-J off-suit. Do you call?

One question this depends on is: what’s the probability p that you will win this hand? An experienced player will know that your best hope is to hit a pair without the other player hitting anything better than her low pair. This has a probability of slightly less than 25%.
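The raw chance of pairing up is easy to sanity-check. A minimal sketch, assuming our read is right and we have exactly six clean outs (the three remaining queens and three remaining jacks); the card counts come from the hand above, everything else is illustrative:

```python
from math import comb

# Known cards: our Q-J, her two hole cards, and the three flop cards
# -> 7 cards seen, leaving 45 unknown cards for the turn and river.
unknown = 45
outs = 6  # three remaining queens plus three remaining jacks

# P(at least one out in the next two cards) = 1 - P(both cards miss)
p_pair = 1 - comb(unknown - outs, 2) / comb(unknown, 2)
print(f"P(pairing Q or J by the river): {p_pair:.4f}")  # ~0.2515
```

Subtracting the runs where she improves at the same time (hitting trips or pairing her kicker) shaves this down a bit, which is where the "slightly less than 25%" figure comes from.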

We could compute or remember a better estimate by accounting for runner-runner outs, but is it worth it? It won’t help us pin down p with amazing accuracy, since we could be wrong about the opponent’s hand to begin with. And anyway, the decision of whether to actually call depends on many other factors: the sizes of your stack, her stack, the blinds and so on. A 1% error in the estimate of the win probability is unlikely to change your decision.
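To see why a 1% error rarely matters, consider the simplest version of the call decision: you risk your call amount to win the pot, so calling is profitable only when p clears a break-even threshold. The pot and bet sizes below are hypothetical, chosen just to illustrate the point:

```python
def call_threshold(pot_before_bet: float, bet: float) -> float:
    """Break-even win probability for calling an all-in bet.

    EV(call) = p * (pot_before_bet + bet) - (1 - p) * bet,
    which is positive exactly when p > bet / (pot_before_bet + 2 * bet).
    """
    return bet / (pot_before_bet + 2 * bet)

# Hypothetical sizes: a pot of 100 chips, facing an all-in bet of 100.
threshold = call_threshold(pot_before_bet=100, bet=100)
print(f"break-even p: {threshold:.3f}")  # ~0.333

# A 1% error in either direction around our ~25% estimate:
for p in (0.24, 0.25, 0.26):
    print(p, "call" if p > threshold else "fold")
```

All three estimates land well below the threshold and lead to the same fold, so refining 25% to 24% or 26% buys us nothing.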

So instead of pointlessly trying to predict the future to an impossible and useless degree of accuracy, we told ourselves a handful of likely stories about what might happen, combined these scenarios into a simple probabilistic prediction of the future, and planned as best we could given this prediction.

This may be the mechanism that makes the availability heuristic a smart choice. The main observed effect of this heuristic is that past (subjective) prevalence seems to be highly linked to future predictions. Patches of stories we’ve heard may even work their way into the stories we tell of the future. The missing link is an internal availability mechanism which chooses which patches to make available for retelling. We seem to use such a mechanism to identify likely outcomes before forwarding them to the more commonly discussed process which integrates these stories of the future into a usable prediction.

What events would be good candidates for becoming available? One thing to notice is that evaluation of the expected value of our actions depends both on the probability and on the impact of their results; but for each specific future we don’t need both these numbers, only their product. If the main function of the internal availability mechanism is to predict value, rather than probability, it stands to reason that high-impact but improbable outcomes will become as available as mundane probable ones. Yes, concepts which were encountered most often in the past, in a context similar to the current one, come to mind easily. But one-in-a-hundred or -thousand outcomes should also become available if they are very important. One-in-a-million ones, on the other hand, are almost never worth the trouble.
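The argument above can be sketched as a toy filter: score each scenario by the product of probability and impact, and make available only those that clear some threshold. All the scenarios, numbers, and the threshold here are made up purely for illustration:

```python
# Hypothetical scenarios: name -> (probability, impact in arbitrary utility units).
scenarios = {
    "mundane commute event": (0.5, 1),
    "one-in-a-hundred windfall": (0.01, 200),
    "one-in-a-thousand accident": (0.001, 5_000),
    "one-in-a-million disaster": (1e-6, 10_000),
}

THRESHOLD = 0.1  # assumed minimal expected impact for a scenario to surface

available = {
    name: p * impact >= THRESHOLD  # only the product matters, as argued above
    for name, (p, impact) in scenarios.items()
}
for name, made_it in available.items():
    print(f"{name}: {'available' if made_it else 'filtered out'}")
```

The one-in-a-thousand accident clears the bar on impact alone, while the one-in-a-million disaster is filtered out despite its large impact, matching the intuition in the paragraph above.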

If something similar is indeed going on in our brains, then it seems to be working pretty well, usually. When I walk down the street, I give no mind to the possibility that there are invisible obstacles in my way. It is so remote that even if I took it into account with an adequately small probability, my actions would probably be roughly the same. It is therefore wise not to occupy my precious processing power with such nonsense.

Even when the internal availability mechanism is working properly, it generates unavoidable errors in prediction. Strictly speaking, ignoring some unlikely and unimportant possibilities is wrong, however practical. And since the heuristic treats whatever comes to mind as evidence of higher probability, it can sometimes fail, particularly if the internal availability mechanism is built for utility but queried for probability.

The mechanism itself can also fail. Availability doesn’t seem to be binary, so one type of failure is to make certain scenarios over- or under-available, marking them as more or less likely and important than they are. There also appears to be some threshold, some minimal value for non-zero availability. Another type of failure is when an important outcome fails to meet this threshold, not becoming available.

Or perhaps an unlikely future becomes available even though it shouldn’t. This may explain why people are unable to estimate low probabilities. In their minds, the prospect of winning the lottery and becoming millionaires creates a vivid image of an exciting future. It’s so immersive that it appears to be a real possibility: it could actually happen!