The Prediction Hierarchy

Related: Advancing Certainty, Reversed Stupidity Is Not Intelligence

The substance of this post is derived from a conversation in the comment thread which I have decided to promote. Teal;deer: if you have to rely on a calculation you may have gotten wrong for your prediction, your expectation for the case when your calculation is wrong should use a simpler calculation, such as reference class forecasting.

Edit 2010-01-19: Toby Ord mentions in the comments Probing the Improbable: Methodological Challenges for Risks with Low Probabilities and High Stakes (PDF) by Toby Ord, Rafaela Hillerbrand, and Anders Sandberg of the Future of Humanity Institute, University of Oxford. It uses a similar mathematical argument, but is much more substantive than this.

A lottery has a jackpot of a million dollars. A ticket costs one dollar. Odds of a given ticket winning are approximately one in forty million. If your utility is linear in dollars, should you bet?

The obvious (and correct) answer is “no”. The clever (and incorrect) answer is “yes”, as follows:

According to your calculations, “this ticket will not win the lottery” is true with probability 99.9999975%. But can you really be sure that you can calculate anything to odds that good? Surely you couldn’t expect to make forty million predictions of which you were that confident and only be wrong once. Rationally, you ought to ascribe a lower confidence to the statement: 99.99%, for example. But this means a 0.01% chance of winning the lottery, corresponding to an expected value of a hundred dollars. Therefore, you should buy the ticket.
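The clever argument’s arithmetic can be sketched numerically, using the figures from the setup above:

```python
# The "clever" (and wrong) expected-value calculation for a $1 lottery ticket.
jackpot = 1_000_000                # jackpot in dollars
ticket = 1                         # ticket price in dollars
p_win_calculated = 1 / 40_000_000  # calculated odds of this ticket winning

# Deflate the confidence in "this ticket will not win" from
# 99.9999975% to 99.99%, as the clever argument recommends.
p_win_deflated = 1 - 0.9999        # 0.01%

ev_calculated = p_win_calculated * jackpot - ticket  # about -$0.975
ev_clever = p_win_deflated * jackpot - ticket        # about +$99
print(ev_calculated, ev_clever)
```

The deflation turns a clearly losing bet into one that appears to pay nearly a hundred dollars per ticket, which is the result the rest of the post picks apart.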

The logic is not obviously wrong, but where is the error?

First, let us write out the calculation algebraically. Let E(L) be the expected value of playing the lottery. Let p(L) be your calculated probability that the lottery will pay off. Let p(C) be your probability that your calculations are correct. Finally, let j represent the value of the jackpot and let t represent the price of the ticket. The obvious way to write the clever theory is:

E(L) = max(p(L), 1-p(C)) * j - t

This doesn’t sound quite right, though; surely you should ascribe a higher confidence when you calculate a higher probability. That said, when p(L) is much less than p(C), it shouldn’t make a large difference. The straightforward way to account for this is to take p(C) as the probability that p(L) is correct, and write the following:

E(L) = [ p(C)*p(L) + 1-p(C) ] * j - t

which can be rearranged as:

E(L) = p(C) * [p(L)*j - t] + (1-p(C)) * [j - t]
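A quick numeric check, using an illustrative p(C) = 0.9999 and the lottery’s figures, confirms the rearrangement and makes the problem visible: the (1-p(C)) branch supplies almost the entire expected value.

```python
p_C = 0.9999          # illustrative confidence that the calculation is correct
p_L = 1 / 40_000_000  # calculated probability of winning
j, t = 1_000_000, 1   # jackpot and ticket price in dollars

form1 = (p_C * p_L + (1 - p_C)) * j - t
form2 = p_C * (p_L * j - t) + (1 - p_C) * (j - t)
assert abs(form1 - form2) < 1e-6  # the rearrangement is exact

# The "calculations are incorrect" branch on its own:
wrong_branch = (1 - p_C) * (j - t)
print(form1, wrong_branch)  # form1 is about $99, and nearly all of it
                            # comes from the wrong-calculation branch
```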

I believe this exposes the problem with the clever argument quite explicitly. Why, if your calculations are incorrect (probability 1-p(C)), should you assume that you are certain to win the lottery? If your calculations are incorrect, they should tell you almost nothing about whether you will win the lottery or not. So what do you do?

What appears to me the elegant solution is to use a less complex calculation—or a series of less complex calculations—to act as your backup hypothesis. In a tricky engineering problem (say, calculating the effectiveness of a heat sink), your primary prediction might come out of a finite element fluid dynamics calculator with p(C) = 0.99 and narrow error bars, but you would also refer to the result of a simple algebraic model with p(C) = 0.9999 and much wider error bars. And then you would backstop the lot with your background knowledge about heat sinks in general, written with wide enough error bars to call p(C) = 1 - epsilon.

In this case, though, the calculation was simple, so our backup prediction is just the background knowledge. Say that, knowing nothing about a lottery but “it’s a lottery”, we would have an expected payoff e. Then we write:

E(L) = p(C) * [p(L)*j - t] + (1-p(C)) * e

I don’t know about you, but for me, e is approximately equal to -t. And justice is restored.
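Plugging e = -t into the corrected formula, with the same illustrative numbers as before, the expected value comes back down to roughly the honest calculation’s answer:

```python
p_C = 0.9999          # illustrative confidence that the calculation is correct
p_L = 1 / 40_000_000  # calculated probability of winning
j, t = 1_000_000, 1   # jackpot and ticket price in dollars
e = -t                # background expectation: a lottery loses the ticket price

# Corrected formula: if the calculation is wrong, fall back on
# background knowledge about lotteries instead of assuming a win.
E_L = p_C * (p_L * j - t) + (1 - p_C) * e
print(E_L)  # about -$0.975: buying the ticket is a bad deal again
```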


We are advised that, when solving hard problems, we should solve multiple problems at once. This point is relatively trivial, but I can point out a couple of other examples where it shows up well:

Suppose the lottery appears to be marginally profitable: should you bet on it? Not unless you are confident in your numbers.

Suppose we consider the LHC. Should we (have) switch(ed) it on? Once you’ve checked that it is safe, yes. As a high-energy physics experiment, the backup comparison would be to things like nuclear energy, which have only small chances of devastation on the planetary scale. If your calculations were to indicate that the LHC is completely safe, even if your p(C) were as low as three or four nines (99.9%, 99.99%), your actual estimate of the safety of turning it on should be no lower than six or seven nines, and probably higher. (In point of fact, given the number of physicists analyzing the question, p(C) is much higher. Three cheers for intersubjective verification.)
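The nines arithmetic here can be sketched with the same formula; the backup disaster probability below is an illustrative assumption standing in for the reference class, not a physics estimate:

```python
# Combine a calculated "completely safe" verdict with a reference-class backup.
p_calc_correct = 0.999       # three nines of confidence in the safety calculation
p_disaster_if_correct = 0.0  # the calculation says the LHC is completely safe
p_disaster_backup = 1e-4     # illustrative disaster probability for the
                             # reference class of high-energy experiments

p_disaster = (p_calc_correct * p_disaster_if_correct
              + (1 - p_calc_correct) * p_disaster_backup)
print(p_disaster)  # 1e-7: seven nines of safety from three nines of confidence
```

The multiplication is the whole point: the calculation’s confidence and the backup’s reassurance stack, so the combined estimate is far safer than either input alone.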

Suppose we consider our Christmas shopping. When you’re estimating your time to finish your shopping, your calculations are not very reliable. Therefore your answer is strongly dominated by the simpler, much more reliable reference class prediction.

But what are the odds that this ticket won’t win the lottery? …how many nines do I type, again?