Privileged Snuff

So one is asked, “What is your probability estimate that the LHC will destroy the world?”

Leaving aside the issue of calling made-up numbers probabilities, there is a more subtle rhetorical trap at work here.

If one makes up a small number, say one in a million, the answer will be, “Could you make a million such statements and not be wrong even once?” (Of course this is a misleading image—doing anything a million times in a row would make you tired and distracted enough to make trivial mistakes. At some level we know this argument is misleading, because nobody calls the non-buyer of lottery tickets irrational for assigning an even lower probability to a win.)
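
For concreteness, here is the arithmetic behind that lottery comparison, as a minimal sketch assuming a standard pick-6-of-49 lottery (my example; the post names no particular lottery): the chance of a jackpot is roughly one in fourteen million, noticeably lower than the one-in-a-million figure above.

```python
# Back-of-envelope check of the lottery comparison, assuming a
# standard pick-6-of-49 lottery (an illustrative assumption).
from math import comb

jackpot_probability = 1 / comb(49, 6)   # ~1 in 13,983,816
one_in_a_million = 1 / 1_000_000        # the made-up LHC estimate above

print(f"6-of-49 jackpot probability: {jackpot_probability:.2e}")  # ~7.15e-08
print(f"One-in-a-million estimate:   {one_in_a_million:.2e}")     # 1.00e-06
print("Jackpot is the less likely event:", jackpot_probability < one_in_a_million)
```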

If one makes up a larger number, say one in a thousand, then one is considered a bad person for wanting to take even one chance in a thousand of destroying the world.

The fallacy here is privileging the hypothesis: http://wiki.lesswrong.com/wiki/Privileging_the_hypothesis

To see why, try inverting the statement: what is your probability estimate that canceling the LHC will result in the destruction of the world?

Unlikely? Well, I agree, it is unlikely. But I can think of plausible ways it could be true. New discoveries in physics could be the key to breakthroughs in areas like renewable energy or interstellar travel—breakthroughs that might just make the difference between a universe ultimately filled with intelligent life and a future of might-have-beens. History shows, after all, that key technologies often arise from unexpected lines of research. I certainly would not be confident in assigning million-to-one odds against the LHC making that difference.

Conversely, we know the LHC is not going to destroy the world, because nature has been banging particles together at much higher energy levels for billions of years. If that sufficed to destroy the world, it would already have happened, and any people you might happen to meet from time to time would be figments of a deranged imagination.
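
To put rough numbers on that (a back-of-envelope sketch not in the original, using the standard fixed-target formula for centre-of-mass energy, sqrt(s) ≈ sqrt(2·E·m_p c²), and the LHC's 14 TeV design energy): a cosmic-ray proton of about 10^17 eV striking a proton at rest already matches the LHC, and the most energetic cosmic rays ever observed, around 3×10^20 eV, exceed it by a factor of fifty or so.

```python
# Rough comparison of cosmic-ray collisions with the LHC, assuming the
# standard fixed-target formula sqrt(s) ~= sqrt(2 * E * m_p c^2) for a
# cosmic-ray proton hitting a proton at rest (illustrative numbers only).
from math import sqrt

PROTON_REST_ENERGY = 0.938e9   # eV
LHC_DESIGN_ENERGY = 14e12      # eV (14 TeV centre-of-mass)

def cm_energy(lab_energy_ev: float) -> float:
    """Centre-of-mass energy of a proton-proton fixed-target collision, in eV."""
    return sqrt(2 * lab_energy_ev * PROTON_REST_ENERGY)

for e in (1e17, 1e19, 3e20):   # 3e20 eV is roughly the highest cosmic-ray energy observed
    s = cm_energy(e)
    print(f"cosmic ray {e:.0e} eV -> sqrt(s) ≈ {s/1e12:.0f} TeV "
          f"(~{s/LHC_DESIGN_ENERGY:.1f}x LHC)")
```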

The hypothesis being privileged in even asking the original question is not a harmless one like the tooth fairy. It is the hypothesis that snuffing out progress, extinguishing futures that might have been, is the safe option. It is not really a forgivable mistake, for we already know otherwise—death is the default, not just for individuals, but for nations, civilizations, species and worlds. It could, however, be the ultimate mistake, the one that places the world in a position from which there is no longer a winning move.

So remember a heuristic from a programmer’s toolkit: sometimes the right answer is “Wherefore dost thou ask?”
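
A minimal sketch of what that heuristic looks like in code (my own illustration, with hypothetical names; the post does not supply one): rather than returning a made-up number, refuse to answer a question whose framing already privileges the hypothesis.

```python
# Illustrative sketch of the "wherefore dost thou ask?" heuristic:
# refuse to emit a number for a question that shouldn't be answered as posed.
# All names here are hypothetical; nothing in this sketch comes from the post.

class MalformedQuestion(Exception):
    """Raised instead of answering a question whose framing is doing the work."""

def probability_of_doom(hypothesis: str, evidence_singles_it_out: bool) -> float:
    if not evidence_singles_it_out:
        # Push back on the question rather than inventing a probability.
        raise MalformedQuestion(f"Wherefore dost thou ask about {hypothesis!r}?")
    raise NotImplementedError("Only worth estimating once the hypothesis has earned attention.")

try:
    probability_of_doom("the LHC destroys the world", evidence_singles_it_out=False)
except MalformedQuestion as question:
    print(question)
```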