When (Not) To Use Probabilities

It may come as a surprise to some readers of this blog, that I do not always advocate using probabilities.

Or rather, I don’t always advocate that human beings, trying to solve their problems, should try to make up verbal probabilities, and then apply the laws of probability theory or decision theory to whatever number they just made up, and then use the result as their final belief or decision.

The laws of probability are laws, not suggestions, but often the true Law is too difficult for us humans to compute. If P != NP and the universe has no source of exponential computing power, then there are evidential updates too difficult for even a superintelligence to compute—even though the probabilities would be quite well-defined, if we could afford to calculate them.

So sometimes you don’t apply probability theory. Especially if you’re human, and your brain has evolved with all sorts of useful algorithms for uncertain reasoning, that don’t involve verbal probability assignments.

Not sure where a flying ball will land? I don’t advise trying to formulate a probability distribution over its landing spots, performing deliberate Bayesian updates on your glances at the ball, and calculating the expected utility of all possible strings of motor instructions to your muscles.

Trying to catch a flying ball, you’re probably better off with your brain’s built-in mechanisms than with deliberative verbal reasoning to invent or manipulate probabilities.

But this doesn’t mean you’re going beyond probability theory or above probability theory.

The Dutch Book arguments still apply. If I offer you a choice of gambles ($10,000 if the ball lands in this square, versus $10,000 if I roll a die and it comes up 6), and you answer in a way that does not allow consistent probabilities to be assigned, then you will accept combinations of gambles that are certain losses, or reject gambles that are certain gains...
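The arithmetic behind a Dutch Book is simple enough to sketch in a few lines. The following toy example (all numbers invented for illustration, not taken from the text above) shows why an agent whose stated chances for an event and its complement sum to more than 1 will accept a combination of bets that loses with certainty:

```python
# Dutch Book sketch: if your stated chances for an event and its complement
# sum to more than 1, a bookie can sell you both sides and pocket the excess.
# All numbers here are illustrative.

def sure_loss(p_event: float, p_complement: float, stake: float = 1.0) -> float:
    """Buy a bet on the event and a bet on its complement, each priced at the
    agent's stated probability times the stake. Exactly one of the two bets
    pays out `stake`, so the agent's guaranteed net result is the stake
    minus the total price paid."""
    total_paid = (p_event + p_complement) * stake
    return stake - total_paid  # negative => a loss no matter what happens

# Incoherent assignment: "60% it lands in the square, 60% it doesn't."
net = sure_loss(0.6, 0.6)
print(round(net, 2))  # -0.2: a certain loss of 0.2 per unit stake

# Coherent assignments sum to 1 and leave no room for a sure-loss book.
print(round(sure_loss(0.4, 0.6), 2))  # 0.0
```

The same bookkeeping generalizes to any finite set of gambles: coherence in de Finetti’s sense is exactly the condition that no such sure-loss combination exists.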

Which still doesn’t mean that you should try to use deliberative verbal reasoning. I would expect that for professional baseball players, at least, it’s more important to catch the ball than to assign consistent probabilities. Indeed, if you tried to make up probabilities, the verbal probabilities might not even be very good ones, compared to some gut-level feeling—some wordless representation of uncertainty in the back of your mind.

There is nothing privileged about uncertainty that is expressed in words, unless the verbal parts of your brain do, in fact, happen to work better on the problem.

And while accurate maps of the same territory will necessarily be consistent among themselves, not all consistent maps are accurate. It is more important to be accurate than to be consistent, and more important to catch the ball than to be consistent.

In fact, I generally advise against making up probabilities, unless it seems like you have some decent basis for them. This only fools you into believing that you are more Bayesian than you actually are.

To be specific, I would advise, in most cases, against using non-numerical procedures to create what appear to be numerical probabilities. Numbers should come from numbers.

Now there are benefits from trying to translate your gut feelings of uncertainty into verbal probabilities. It may help you spot problems like the conjunction fallacy. It may help you spot internal inconsistencies—though it may not show you any way to remedy them.

But you shouldn’t go around thinking that, if you translate your gut feeling into “one in a thousand”, then, on occasions when you emit these verbal words, the corresponding event will happen around one in a thousand times. Your brain is not so well-calibrated. If instead you do something nonverbal with your gut feeling of uncertainty, you may be better off, because at least you’ll be using the gut feeling the way it was meant to be used.
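Calibration, in this sense, is an empirically checkable property. Here is a toy sketch (all counts invented for illustration) of comparing a verbal “one in a thousand” against the frequency actually observed:

```python
# Toy calibration check: compare a claimed probability against the observed
# frequency of the event over many occasions. All numbers here are invented.

def observed_rate(outcomes: list) -> float:
    """Fraction of occasions on which the predicted event actually happened."""
    return sum(outcomes) / len(outcomes)

claimed = 1 / 1000  # the verbal assignment: "one in a thousand"

# Suppose you uttered "one in a thousand" on 5,000 occasions,
# and the event in fact occurred 40 times.
outcomes = [True] * 40 + [False] * 4960
rate = observed_rate(outcomes)

print(rate)            # 0.008
print(rate / claimed)  # 8.0 -- the gut-to-words translation was off by a factor of 8
```

A well-calibrated forecaster would see the observed rate converge toward the claimed rate as occasions accumulate; the gap is exactly what the paragraph above says human verbal assignments tend to have.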

This specific topic came up recently in the context of the Large Hadron Collider, and an argument given at the Global Catastrophic Risks conference:

That we couldn’t be sure that there was no error in the papers which showed from multiple angles that the LHC couldn’t possibly destroy the world. And moreover, the theory used in the papers might be wrong. And in either case, there was still a chance the LHC could destroy the world. And therefore, it ought not to be turned on.

Now if the argument had been given in just this way, I would not have objected to its epistemology.

But the speaker actually purported to assign a probability of at least 1 in 1000 that the theory, model, or calculations in the LHC paper were wrong; and a probability of at least 1 in 1000 that, if the theory or model or calculations were wrong, the LHC would destroy the world.

After all, it’s surely not so improbable that future generations will reject the theory used in the LHC paper, or reject the model, or maybe just find an error. And if the LHC paper is wrong, then who knows what might happen as a result?

So that is an argument—but to assign numbers to it?

I object to the air of authority given these numbers pulled out of thin air. I generally feel that if you can’t use probabilistic tools to shape your feelings of uncertainty, you ought not to dignify them by calling them probabilities.

The alternative I would propose, in this particular case, is to debate the general rule of banning physics experiments because you cannot be absolutely certain of the arguments that say they are safe.

I hold that if you phrase it this way, then your mind, by considering frequencies of events, is likely to bring in more consequences of the decision, and remember more relevant historical cases.

If you debate just the one case of the LHC, and assign specific probabilities, it (1) gives very shaky reasoning an undue air of authority, (2) obscures the general consequences of applying similar rules, and even (3) creates the illusion that we might come to a different decision if someone else published a new physics paper that decreased the probabilities.

The authors at the Global Catastrophic Risks conference seemed to be suggesting that we could just do a bit more analysis of the LHC and then switch it on. This struck me as the most disingenuous part of the argument. Once you admit the argument “Maybe the analysis could be wrong, and who knows what happens then,” there is no possible physics paper that can ever get rid of it.

No matter what other physics papers had been published previously, the authors would have used the same argument and made up the same numerical probabilities at the Global Catastrophic Risks conference. I cannot be sure of this statement, of course, but it has a probability of 75%.

In general a rationalist tries to make their mind function at the best achievable power output; sometimes this involves talking about verbal probabilities, and sometimes it does not, but always the laws of probability theory govern.

If all you have is a gut feeling of uncertainty, then you should probably stick with those algorithms that make use of gut feelings of uncertainty, because your built-in algorithms may do better than your clumsy attempts to put things into words.

Now it may be that, reasoning this way, I find myself inconsistent. For example, I would be substantially more alarmed about a lottery device with a well-defined chance of 1 in 1,000,000 of destroying the world, than I am about the Large Hadron Collider being switched on.

On the other hand, if you asked me whether I could make one million statements of authority equal to “The Large Hadron Collider will not destroy the world”, and be wrong, on average, around once, then I would have to say no.

What should I do about this inconsistency? I’m not sure, but I’m certainly not going to wave a magic wand to make it go away. That’s like finding an inconsistency in a pair of maps you own, and quickly scribbling some alterations to make sure they’re consistent.

I would also, by the way, be substantially more worried about a lottery device with a 1 in 1,000,000,000 chance of destroying the world, than a device which destroyed the world if the Judeo-Christian God existed. But I would not suppose that I could make one billion statements, one after the other, fully independent and equally fraught as “There is no God”, and be wrong on average around once.

I can’t say I’m happy with this state of epistemic affairs, but I’m not going to modify it until I can see myself moving in the direction of greater accuracy and real-world effectiveness, not just moving in the direction of greater self-consistency. The goal is to win, after all. If I make up a probability that is not shaped by probabilistic tools, if I make up a number that is not created by numerical methods, then maybe I am just defeating my built-in algorithms that would do better by reasoning in their native modes of uncertainty.

Of course this is not a license to ignore probabilities that are well-founded. Any numerical founding at all is likely to be better than a vague feeling of uncertainty; humans are terrible statisticians. But pulling a number entirely out of your butt, that is, using a non-numerical procedure to produce a number, is nearly no foundation at all; and in that case you probably are better off sticking with the vague feelings of uncertainty.

Which is why my Overcoming Bias posts generally use words like “maybe” and “probably” and “surely” instead of assigning made-up numerical probabilities like “40%” and “70%” and “95%”. Think of how silly that would look. I think it actually would be silly; I think I would do worse thereby.

I am not the kind of straw Bayesian who says that you should make up probabilities to avoid being subject to Dutch Books. I am the sort of Bayesian who says that in practice, humans end up subject to Dutch Books because they aren’t powerful enough to avoid them; and moreover it’s more important to catch the ball than to avoid Dutch Books. The math is like underlying physics, inescapably governing, but too expensive to calculate. Nor is there any point in a ritual of cognition which mimics the surface forms of the math, but fails to produce systematically better decision-making. That would be a lost purpose; this is not the true art of living under the law.