Well, when I was working on an S5W/S3G (MTS 635, SSBN 732 blue) power plant, our baseline “end of the world” scenario started with “a non-isolateable double-ended shear of a main coolant loop” (half of the power plant falls off). I can’t begin to estimate the likelihood of that failure, but I think quantum mechanics can.
If classical mechanics gives you a failure rate that has uncertainty, you can incorporate that uncertainty into your final uncertainty: “We believe it is four nines or better that this type of valve fails in this manner with this frequency or less.”
And at some point, you don’t trust the traffic engineer to guess the load; you post a load limit on the bridge.
So they can say “our model tells us that, with these point inputs, this happens once in a billion times,” but they can’t yet say “with our model and these input distributions, the chance of this happening more than once in a million times is less than one in a thousand.” The second statement must be true for the first to be useful as an upper bound on the estimate (rather than as an expected value of the estimate).
Why not? Can’t we integrate over all of the input distributions, and compare the total volume of input distributions with failure chance greater than one in N with the total volume of all input distributions?
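That volume comparison can be done by straight Monte Carlo: sample input combinations from their distributions, check each against the threshold, and take the fraction that exceed it. A minimal sketch, where `failure_chance` and the two input distributions are purely hypothetical stand-ins for the real model:

```python
import random

def failure_chance(inputs):
    # Hypothetical stand-in for the real model: maps one sampled
    # input combination to a per-demand failure probability.
    leak_rate, weld_quality = inputs
    return leak_rate * (1.0 - weld_quality)

def fraction_exceeding(threshold, n_samples=100_000, seed=0):
    """Estimate what fraction of the input-distribution volume has
    failure chance greater than the threshold (e.g. one in N)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_samples):
        # Draw each input from its (assumed) distribution.
        sample = (rng.uniform(0.0, 1e-4), rng.uniform(0.9, 1.0))
        if failure_chance(sample) > threshold:
            exceed += 1
    return exceed / n_samples
```

This works fine when `failure_chance` is cheap; the catch, as discussed below, is when each evaluation is itself expensive.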
The impression I got was that this is the approach that they would take with infinite computing power, but that it took a significant amount of time to determine if any particular combination of input variables would lead to a failure chance greater than one in N, meaning normal integration won’t work. There are a couple of different ways to attack that problem, each making different tradeoffs.
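A rough sanity check on why normal integration breaks down when evaluations are slow: even just to certify that the exceedance fraction is below one in a million, with one-in-a-thousand residual doubt and zero observed exceedances, the exact binomial zero-failure bound demands millions of model runs. The one-hour-per-evaluation figure below is an assumption for illustration only:

```python
import math

def runs_needed(p_bound, confidence=0.999):
    """Evaluations required, all passing, to claim at the given
    confidence that the exceedance fraction is below p_bound.
    Uses the exact binomial zero-failure bound: (1 - p)^n <= 1 - conf."""
    return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - p_bound))

n = runs_needed(1e-6)        # roughly 6.9 million evaluations
hours_each = 1.0             # assumed cost per evaluation
years = n * hours_each / (24 * 365)  # centuries of serial compute
```

With numbers like that, the tradeoffs mentioned above (cheaper surrogate checks, restricted input distributions, or smarter sampling) are the only way forward.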
If each data point is prohibitively expensive, then the only thing I can suggest is limiting the permissible input distributions. If that’s not possible, I think the historical path is to continue to store the waste in pools at each power plant while future research and politics are done on the problem.