Horrible LHC Inconsistency

Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?

While trying to answer my own question on “How Many LHC Failures Is Too Many?” I realized that I’m horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.

First, I thought that stating a “one-in-a-million” probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.

But if you asked me whether I could make one million statements of authority equal to “The Large Hadron Collider will not destroy the world”, and be wrong, on average, around once, then I would have to say no.

Unknown pointed out that this turns me into a money pump. Given a portfolio of a million existential risks to which I had assigned a “less than one in a million probability”, I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but would rather take any particular risk in this portfolio than press the button.
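A toy numeric sketch of why these two preferences can’t both hold (the five expected errors below are my own illustrative number, not a figure from the argument): if I expect to be wrong more than once across a million “less than one-in-a-million” statements, then my implied average probability per risk already exceeds the button’s one-in-a-million.

```python
# Toy consistency check with made-up numbers.
# Claimed: each of a million risks has probability < 1e-6.
# Also claimed: across a million such statements, I'd be wrong more than once.
# These two claims are incompatible, as the arithmetic shows.

n_risks = 1_000_000
stated_bound = 1 / 1_000_000   # "less than one in a million" for each risk
expected_errors = 5            # illustrative: how often I'd expect to be wrong

implied_avg_prob = expected_errors / n_risks
print(implied_avg_prob > stated_bound)   # True: the implied average exceeds the bound
```

So preferring the button to a random draw from the portfolio, while preferring every individual risk to the button, is an exploitable preference cycle: the average cannot exceed the bound while every member stays below it.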

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.

If each failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such… then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around. (And that’s taking into account my uncertainty about whether the anthropic principle really works that way.)
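For concreteness, a rough Bayesian sketch of why twenty such coincidences carry so much weight (the one-in-a-million prior is my own illustrative assumption): twenty 50% failures in a row have probability 2^-20 ≈ 10^-6, which is just about enough evidence to cancel a 10^-6 prior.

```python
from fractions import Fraction

# Illustrative Bayesian update, with an assumed prior.
# H_danger: failures are forced (e.g. anthropic selection), so each occurs with probability 1.
# H_chance: each failure is the known 50% natural event (the quantum coin).

prior_danger = Fraction(1, 1_000_000)            # assumed tiny prior, for illustration
n_failures = 20

likelihood_chance = Fraction(1, 2) ** n_failures  # (1/2)^20, about 1e-6
likelihood_danger = Fraction(1, 1)                # forced failure every time

posterior_danger = (prior_danger * likelihood_danger) / (
    prior_danger * likelihood_danger + (1 - prior_danger) * likelihood_chance
)
print(float(posterior_danger))  # roughly 0.51: the evidence just cancels the prior
```

Under these assumed numbers, 20 heads in a row takes the dangerous hypothesis from one-in-a-million to roughly even odds, which is why the impulse to bet on the 21st head is not obviously crazy.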

Even having noticed this triple inconsistency, I’m not sure in which direction to resolve it!

(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down, compared with using the same capital to worry about superhuman intelligence or nanotechnology.)