You can’t believe in Bayes

Well, you can. It’s just oxymoronic, or at least ironic, because belief is contrary to the Bayesian paradigm.

You use Bayesian methods to choose an action: you start from a set of observations, assign probabilities to possible outcomes, and pick the action with the highest expected value.

Belief in an outcome N means that you set p(N) ≈ 1 once p(N) exceeds some threshold. It’s a useful computational shortcut. But when you use it, you’re not treating N in a Bayesian manner. When you categorize things into beliefs/nonbeliefs, and then act based on whether you believe N or not, you are throwing away the information contained in the probability judgment, in order to save computation time. It is especially egregious if the threshold you use to sort things into beliefs/nonbeliefs is relatively constant, rather than being a function of (expected value of N) / (expected value of not N).
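To make the contrast concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the 0.9 threshold, the 2% probability, and the dollar amounts are made up for illustration.

```python
# A minimal sketch (all numbers hypothetical) contrasting the two policies
# described above: collapsing p(N) into a belief versus using p(N) directly.

BELIEF_THRESHOLD = 0.9  # a "relatively constant" cutoff, as criticized above

def act_on_belief(p_n: float) -> bool:
    """Take the precaution only if we 'believe' N will happen."""
    return p_n > BELIEF_THRESHOLD  # p(N) is rounded down to 0 or up to 1 here

def act_on_expected_value(p_n: float, loss_if_n: float, cost_of_precaution: float) -> bool:
    """Take the precaution whenever it is worth it in expectation."""
    return p_n * loss_if_n > cost_of_precaution

# A 2% chance of a $100,000 loss, against a $500 precaution:
print(act_on_belief(0.02))                        # False: 0.02 < 0.9
print(act_on_expected_value(0.02, 100_000, 500))  # True: 2000 > 500
```

The only point of the sketch is that the second policy uses the whole probability, while the first throws it away at the threshold.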

If your neighbor took out fire insurance on his house, you wouldn’t infer that he believed his house was going to burn down. And if he took his umbrella to work, you wouldn’t (I hope) infer that he believed it was going to rain.
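To put made-up numbers on the umbrella: a 20% chance of rain and a ruined $50 shirt make leaving it home cost $10 in expectation. If carrying it bothers him less than that, he takes it, while still assigning rain a probability of only 0.2.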

Yet when it comes to decisions on a national scale, people cast things in terms of belief. Do you believe North Korea will sell nuclear weapons to Syria? That’s the wrong question when you’re dealing with a country that has, let’s say, a 20% chance of building weapons that will be used to level at least ten major US cities.

Or flash back to the 1990s, before there was a scientific consensus that global warming was real. People would often say, “I don’t believe in global warming.” And interviewers tried to discern whether scientists did or did not believe in global warming.

It’s the wrong question. The right question is which steps are worth taking, given your assigned probabilities and expected-value computations.

A scientist doesn’t have to believe in something to consider it worthy of study. Do you believe an asteroid will hit the Earth this century? Do you believe we can cure aging in your lifetime? Do you believe we will have a hard-takeoff singularity? If a low-probability outcome can have a large impact on expected utility, you’ve already gone wrong the moment you frame it as a question of belief.