What’s the probability the LHC will save the world? That either some side effect of running it, or some knowledge gained from it, will prevent a future catastrophe? At least of the same order of fuzzy small non-zeroness as the doomsday scenario.
I think that’s the larger fault here. You don’t just have to show that X has some chance of being bad in order to justify being against it; you also have to show that it’s predictably worse than not-X. If you can’t, then the uncertain badness is better read as noise at the straining limit of your ability to predict—and that, to me, adds back up to normality.