Horrible LHC Inconsistency

Followup to: When (Not) To Use Probabilities, How Many LHC Failures Is Too Many?

While trying to answer my own question on “How Many LHC Failures Is Too Many?” I realized that I’m horrendously inconsistent with respect to my stated beliefs about disaster risks from the Large Hadron Collider.

First, I thought that stating a “one-in-a-million” probability for the Large Hadron Collider destroying the world was too high, in the sense that I would much rather run the Large Hadron Collider than press a button with a known 1/1,000,000 probability of destroying the world.

But if you asked me whether I could make one million statements of authority equal to “The Large Hadron Collider will not destroy the world”, and be wrong, on average, around once, then I would have to say no.
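These two answers can’t both stand, because they are the same claim in different clothes. As a quick sketch, here is the expected-error arithmetic, using nothing beyond the numbers already stated:

```python
# Expected number of wrong statements when each of a million statements
# is assigned a one-in-a-million probability of being wrong.
n_statements = 1_000_000
p_wrong = 1 / 1_000_000

expected_errors = n_statements * p_wrong
print(expected_errors)  # 1.0: "wrong, on average, around once"
```

Refusing the million-statements test while accepting the button comparison means I am using two different numbers for the same belief.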

Unknown pointed out that this turns me into a money pump. Given a portfolio of a million existential risks, to each of which I had assigned a “less than one in a million” probability, I would rather press the button on the fixed-probability device than run a random risk from this portfolio; but I would rather take any particular risk in this portfolio than press the button.
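To make the pump explicit, here is a toy sketch; the bookie, the fee, and the swap sequence are illustrative assumptions of mine, not anything Unknown specified. The trick is that a “random risk from the portfolio” always resolves to some particular risk, so the two preferences above can be cycled forever:

```python
import random

N_RISKS = 1_000_000
FEE = 0.01       # hypothetical price I'd pay for each preferred swap
extracted = 0.0  # total the bookie pumps out of me

for _ in range(3):  # three turns of the pump
    # I hold "a random risk from the portfolio", and prefer the button to it:
    extracted += FEE               # pay to swap the random draw for the button
    k = random.randrange(N_RISKS)  # the bookie now names a particular risk...
    extracted += FEE               # ...and I pay to swap the button for risk k,
                                   # since I prefer any particular risk to it.
    # But risk k was drawn at random, so I am once again holding a random
    # risk from the portfolio, and the cycle repeats.

print(f"Fees extracted: ${extracted:.2f}")  # grows without bound
```

The cycle is only possible because I am claiming that the portfolio’s average risk exceeds one in a million while every individual risk falls below it, which no set of numbers can satisfy.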

Then, I considered the question of how many mysterious failures at the LHC it would take to make me question whether it might destroy the world/universe somehow, and what this revealed about my prior probability.

If each mysterious failure had a known 50% probability of occurring from natural causes, like a quantum coin or some such… then I suspect that if I actually saw that coin come up heads 20 times in a row, I would feel a strong impulse to bet on it coming up heads the next time around. (And that’s taking into account my uncertainty about whether the anthropic principle really works that way.)
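To connect this back to the earlier beliefs: twenty heads in a row is roughly a one-in-a-million coincidence (2^-20 ≈ 10^-6), so the impulse to bet heads on the twenty-first flip only makes sense if my prior on the danger hypothesis was not far below one in a million. Here is a sketch of the implied update, under simplifying assumptions of my own (the “dangerous” hypothesis makes every surviving observer see heads; the “safe” hypothesis gives fair flips):

```python
def posterior_dangerous(prior: float, n_heads: int) -> float:
    """P(LHC is dangerous | n_heads mysterious failures, and we survived).

    Assumed likelihoods (illustrative, not canonical anthropics):
      P(all heads | dangerous, survival) = 1   (anthropic selection)
      P(all heads | safe)                = 0.5 ** n_heads
    """
    odds = (prior / (1 - prior)) / (0.5 ** n_heads)
    return odds / (1 + odds)

for prior in (1e-6, 1e-9, 1e-12):
    print(f"prior {prior:.0e} -> posterior {posterior_dangerous(prior, 20):.6f}")
# prior 1e-06 -> posterior ~0.51  (enough to feel the impulse to bet heads)
# prior 1e-09 -> posterior ~0.001
# prior 1e-12 -> posterior ~0.000001
```

If my prior were anywhere near as low as the button comparison implied, twenty mysterious failures should barely move me; the betting impulse says otherwise.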

Even having noticed this triple inconsistency, I’m not sure in which direction to resolve it!

(But I still maintain my resolve that the LHC is not worth expending political capital, financial capital, or our time to shut down, compared with using the same capital to worry about superhuman intelligence or nanotechnology.)