I am coming to the conclusion that “extraordinary claims require extraordinary evidence” is just bad advice, precisely because it causes people to conflate large consequences and prior improbability. People are fond of saying it about cryonics, for example.
At least sometimes, people may say “extraordinary claims require extraordinary evidence” when they mean “your large novel claim has set off my fraud risk detector; please show me how you’re not a scam.”
In other words, the caution being expressed is not about prior probabilities in the natural world, but rather the intentions and morals of the claimant.
We need two new versions of the advice, to satisfy everyone.
Version for scientists: “improbable claims require extraordinary evidence”.
Version for politicians: “inconvenient claims require extraordinary evidence”.
Well, consider the strategic point of view. Suppose that a system (humans) is known for its poor performance at evaluating claims without direct experimentation; there is a long, long history of such failures.
Consider also that a false high-impact claim can ruin this system’s ability to perform its survival function, again with a long history of such events; the damage is proportional to the claimed impact. (The Mayans are a good example, killing people so that the sun would rise tomorrow; great utilitarian rationalists they were, believing their reasoning perfect enough to warrant such action. Note that donating to a wrong charity instead of a right one kills people.)
When we anticipate that a huge percentage of claims will be false, we can build the system to require evidence such that, if the claim were false, the system would find itself in a low-probability world (i.e. require that evidence be collected so that p(evidence | ~claim)/p(evidence | claim) is low), so that the system, once deployed, falls off cliffs less often. The required strength of the evidence then increases with the impact of the claim.
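The rule above can be sketched in a few lines of code. This is a minimal illustration, not anyone’s actual decision procedure; the function names and the particular scaling of the threshold with impact are hypothetical choices made for the example. Evidence strength is measured by the likelihood ratio p(evidence | ~claim) / p(evidence | claim): the lower it is, the more surprising the evidence would be if the claim were false.

```python
def required_ratio(impact: float, base: float = 0.1) -> float:
    """Maximum acceptable p(E|~C)/p(E|C); hypothetical scaling: shrinks as
    claimed impact grows, so bigger claims demand stronger evidence."""
    return base / impact

def accept_claim(p_evidence_given_not_claim: float,
                 p_evidence_given_claim: float,
                 impact: float) -> bool:
    """Accept only if the evidence would be unlikely under ~claim,
    with the bar raised in proportion to the claimed impact."""
    ratio = p_evidence_given_not_claim / p_evidence_given_claim
    return ratio <= required_ratio(impact)

# The same evidence (ratio ~ 0.056) passes for a modest claim...
print(accept_claim(0.05, 0.9, impact=1.0))   # True  (0.056 <= 0.1)
# ...but fails for a claim ten times the impact.
print(accept_claim(0.05, 0.9, impact=10.0))  # False (0.056 > 0.01)
```

The point of the sketch is only that the acceptance threshold is a function of claimed impact, not of prior probability alone, which is exactly the conflation the grandparent comment complains about.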
It is not an ideal strategy, but it is one that works given the limitations. There are other strategies, and it is not straightforward to improve performance (while it is easy to degrade it by making idealized implicit assumptions).