Confidence levels inside and outside an argument

Related to: Infinite Certainty

Suppose the people at FiveThirtyEight have created a model to predict the results of an important election. After crunching poll data, area demographics, and all the usual things one crunches in such a situation, their model returns a greater than 999,999,999 in a billion chance that the incumbent wins the election. Suppose further that the results of this model are your only data and you know nothing else about the election. What is your confidence level that the incumbent wins the election?

Mine would be significantly less than 999,999,999 in a billion.

When an argument gives a probability of 999,999,999 in a billion for an event, then probably the majority of the probability of the event is no longer in “But that still leaves a one in a billion chance, right?”. The majority of the probability is in “That argument is flawed”. Even if you have no particular reason to believe the argument is flawed, the background chance of an argument being flawed is still greater than one in a billion.

More than one in a billion times a political scientist writes a model, ey will get completely confused and write something with no relation to reality. More than one in a billion times a programmer writes a program to crunch political statistics, there will be a bug that completely invalidates the results. More than one in a billion times a staffer at a website publishes the results of a political calculation online, ey will accidentally switch which candidate goes with which chance of winning.

So one must distinguish between levels of confidence internal and external to a specific model or argument. Here the model’s internal level of confidence is 999,999,999 in a billion. But my external level of confidence should be lower, even if the model is my only evidence, by an amount proportional to my trust in the model.
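
To make the distinction concrete, here is a minimal sketch of the arithmetic, assuming external confidence is a simple mixture of the model’s output and the prior, weighted by trust in the model; the 99.9% trust figure is an illustrative assumption, not anything FiveThirtyEight reports:

```python
def external_confidence(p_internal, p_model_sound, p_prior=0.5):
    """Blend a model's internal probability with the prior,
    weighted by how much we trust the model itself."""
    return p_model_sound * p_internal + (1 - p_model_sound) * p_prior

# The model's own output: 999,999,999 in a billion.
p_internal = 999_999_999 / 1_000_000_000

# Even granting 99.9% trust that the model is sound, the
# "model is flawed" term caps our external confidence:
print(external_confidence(p_internal, p_model_sound=0.999))
# ~0.9995 -- about a 1 in 2,000 chance of error, nowhere near 1 in a billion
```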

Is That Really True?

One might be tempted to respond “But there’s an equal chance that the false model is too high, versus that it is too low.” Maybe there was a bug in the computer program, but it prevented it from giving the incumbent’s real chances of 999,999,999,999 out of a trillion.

The prior probability of a candidate winning an election is 50%.[1] We need information to push us away from this probability in either direction. To push significantly away from this probability, we need strong information. Any weakness in the information weakens its ability to push away from the prior. If there’s a flaw in FiveThirtyEight’s model, that takes us away from their probability of 999,999,999 in a billion, and back closer to the prior probability of 50%.

We can confirm this with a quick sanity check. Suppose we know nothing about the election (i.e., we still think it’s 50-50) until an insane person reports a hallucination that an angel has declared the incumbent to have a 999,999,999 in a billion chance. We would not be tempted to accept this figure on the grounds that it is equally likely to be too high as too low.
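
The same mixture from the sketch above captures this: give the “angel told me” model essentially zero trust (an illustrative figure) and its output barely moves us off the prior.

```python
trust = 1e-9  # illustrative: essentially no trust in a hallucination
print(trust * (999_999_999 / 1_000_000_000) + (1 - trust) * 0.5)
# ~0.5000000005 -- effectively still 50-50
```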

A second objection covers situations such as a lottery. I would like to say the chance that Bob wins a lottery with one billion players is 1 in a billion. Do I have to adjust this upward to cover the possibility that my model for how lotteries work is somehow flawed? No. Even if I am misunderstanding the lottery, I have not departed from my prior. Here, new information really does have an equal chance of going against Bob as of going in his favor. For example, the lottery may be fixed (meaning my original model of how to determine lottery winners is fatally flawed), but there is no greater reason to believe it is fixed in favor of Bob than anyone else.[2]
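
Here is a sketch of why the lottery is different, assuming (purely for illustration) some probability f that my model of the lottery is wrong: as long as a rigged lottery is no likelier to favor Bob than anyone else, f cancels out.

```python
N = 1_000_000_000  # players
f = 0.01           # illustrative chance that my model of the lottery is wrong

# A rigged lottery still favors each of the N players with chance 1/N,
# so the model-uncertainty term has no net effect:
p_bob = (1 - f) * (1 / N) + f * (1 / N)
print(p_bob)  # 1e-09, regardless of f
```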

Spotted in the Wild

The recent Pascal’s Mugging thread spawned a discussion of the Large Hadron Collider destroying the universe, which also got continued on an older LHC thread from a few years ago. Everyone involved agreed the chances of the LHC destroying the world were less than one in a million, but several people gave extraordinarily low chances based on cosmic ray collisions. The argument was that since cosmic rays have been performing particle collisions similar to the LHC’s zillions of times per year, the chance that the LHC will destroy the world is either literally zero, or else a number related to the probability that there’s some chance of a cosmic ray destroying the world so minuscule that it hasn’t gotten actualized in zillions of cosmic ray collisions. Of the commenters mentioning this argument, one gave a probability of 1/(3*10^22), another suggested 1/10^25, both of which may be good numbers for the internal confidence of this argument.

But the connection between this argument and the general LHC argument flows through statements like “collisions produced by cosmic rays will be exactly like those produced by the LHC”, “our understanding of the properties of cosmic rays is largely correct”, and “I’m not high on drugs right now, staring at a package of M&Ms and mistaking it for a really intelligent argument that bears on the LHC question”, all of which are probably more likely than 1/10^20. So instead of saying “the probability of an LHC apocalypse is now 1/10^20”, say “I have an argument with an internal probability of an LHC apocalypse of 1/10^20, which lowers my probability a bit, depending on how much I trust that argument”.

In fact, the argument has a potential flaw: according to Giddings and Mangano, the physicists officially tasked with investigating LHC risks, black holes from cosmic rays might have enough momentum to fly through Earth without harming it, and black holes from the LHC might not.[3] This was predictable: it was a simple argument in a complex area trying to prove a negative, and it would have been presumptuous to believe with greater than 99% probability that it was flawless. If you can only give 99% probability to the argument being sound, then it can only reduce your probability of the conclusion by a factor of a hundred, not a factor of 10^20.
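
Working through that last sentence with illustrative numbers (the 10^-6 prior is just a stand-in for whatever probability preceded the cosmic-ray argument):

```python
p_internal = 1e-20  # the cosmic-ray argument's own output
p_sound = 0.99      # generous credence that the argument is flawless
p_prior = 1e-6      # illustrative pre-argument probability

p_external = p_sound * p_internal + (1 - p_sound) * p_prior
print(p_external)   # ~1e-08: the "argument is flawed" term dominates,
                    # so the update is a factor of ~100, not 10^20
```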

But it’s hard for me to be properly outraged about this, since the LHC did not destroy the world. A better example might be the following, taken from an online discussion of creationism[4] and apparently based on something by Fred Hoyle:

In order for a single cell to live, all of the parts of the cell must be assembled before life starts. This involves 60,000 proteins that are assembled in roughly 100 different combinations. The probability that these complex groupings of proteins could have happened just by chance is extremely small. It is about 1 chance in 10 to the 4,478,296 power. The probability of a living cell being assembled just by chance is so small, that you may as well consider it to be impossible. This means that the probability that the living cell is created by an intelligent creator, that designed it, is extremely large. The probability that God created the living cell is 10 to the 4,478,296 power to 1.

Note that someone just gave a confidence level of 10^4,478,296 to one and was wrong. This is the sort of thing that should never ever happen. This is possibly the most wrong anyone has ever been.

It is hard to say in words exactly how wrong this is. Saying “This person would be willing to bet the entire world GDP for a thousand years if evolution were true against a one in one million chance of receiving a single penny if creationism were true” doesn’t even begin to cover it: a mere 1/10^25 would suffice there. Saying “This person believes he could make one statement about an issue as difficult as the origin of cellular life per Planck interval, every Planck interval from the Big Bang to the present day, and not be wrong even once” only brings us to 1/10^61 or so. If the chance of getting Ganser’s Syndrome, the extraordinarily rare psychiatric condition that manifests in a compulsion to say false statements, is one in a hundred million, and the world’s top hundred thousand biologists all agree that evolution is true, then this person should preferentially believe it is more likely that all hundred thousand have simultaneously come down with Ganser’s Syndrome than that they are doing good biology.[5]
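
The magnitudes in that paragraph check out on a napkin; the GDP figure, the universe’s age, and the independence assumption are all rough, illustrative values:

```python
import math

# Bet: world GDP (~$6e13/year) for a thousand years, in pennies,
# against a one-in-a-million chance of a single penny:
stake = 6e13 * 100 * 1000       # ~6e18 pennies at risk
expected_payoff = 1e-6          # expected pennies if creationism is true
print(stake / expected_payoff)  # ~6e24 -- odds around 1/10^25 suffice

# Planck intervals from the Big Bang to the present day:
print(4.3e17 / 5.4e-44)         # ~8e60 -- one never-wrong claim per
                                # interval only reaches ~1/10^61

# All 100,000 top biologists with Ganser's Syndrome at once,
# assuming independence and a 1-in-100-million base rate:
print(100_000 * math.log10(1e-8))  # -800,000: a log10-probability still
                                   # vastly above -4,478,296
```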

This creationist’s flaw wasn’t mathematical; the math probably does return that number. The flaw was confusing the internal probability (that complex life would form completely at random in a way that can be represented with this particular algorithm) with the external probability (that life could form without God). He should have added a term representing the chance that his knockdown argument just didn’t apply.

Finally, consider the question of whether you can assign 100% certainty to a mathematical theorem for which a proof exists. Eliezer has already examined this issue and come out against it (citing as an example this story of Peter de Blanc’s). In fact, this is just the specific case of differentiating internal versus external probability when internal probability is equal to 100%. Now your probability that the theorem is false is entirely based on the probability that you’ve made some mistake.
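
In the mixture from the first sketch, this is the special case where the internal probability equals 1, so all residual doubt comes from the trust term; the 99.99% trust figure is illustrative.

```python
trust = 0.9999  # illustrative confidence that the proof-checking was error-free
p_true = trust * 1.0 + (1 - trust) * 0.5  # internal probability is exactly 1
print(1 - p_true)  # ~5e-05 -- the chance the theorem is false is
                   # (1 - trust) * (1 - prior): purely the chance you erred
```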

The many mathematical proofs that were later overturned provide practical justification for this mindset.

This is not a fully general argument against giving very high levels of confidence: very complex situations and situations with many exclusive possible outcomes (like the lottery example) may still make it to the 1/10^20 level, albeit probably not the 1/10^4,478,296. But in other sorts of cases, giving a very high level of confidence requires a check that you’re not confusing the probability inside one argument with the probability of the question as a whole.

Footnotes

1. Although technically we know we’re talking about an incumbent, who typically has a much higher chance, around 90% in Congress.

2. A particularly devious objection might be “What if the lottery commissioner, in a fit of political correctness, decides that ‘everyone is a winner’ and splits the jackpot a billion ways?” If this would satisfy your criteria for “winning the lottery”, then this mere possibility should indeed move your probability upward. In fact, since there is probably greater than a one in one billion chance of this happening, the majority of your probability for Bob winning the lottery should concentrate here!

3. Giddings and Mangano then go on to re-prove the original “won’t cause an apocalypse” argument using a more complicated method involving white dwarf stars.

4. While searching creationist websites for the half-remembered argument I was looking for, I found what may be my new favorite quote: “Mathematicians generally agree that, statistically, any odds beyond 1 in 10 to the 50th have a zero probability of ever happening.”

5. I’m a little worried that five years from now I’ll see this quoted on some creationist website as an actual argument.