Reading this clarified something for me. In particular: “Banish talk like ‘There is absolutely no evidence for that belief’.”
OK, I can see that mathematically there can be very small amounts of evidence for some propositions (e.g. the existence of the deity Thor). In practice, however, there is a limit to how small a piece of evidence can be before I can no longer make any practical use of it. If we assign certainties to our beliefs on a scale of 0 to 100, what can I realistically do with a bit of evidence that moves me from 87 to 87.01? Or to 86.99? I don’t think I can estimate my certainty accurately to one decimal place; in fact, I’m not sure I can get it to within one significant digit on many issues. And yet there’s a lot of evidence in the world that should move my beliefs by far less than that.
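For concreteness, here’s the arithmetic in odds form (a minimal sketch; Bayes’ theorem in odds form says posterior odds = prior odds × likelihood ratio, and the 87 → 87.01 figures are just the ones above):

    from math import log10

    def odds(p):
        return p / (1 - p)

    # Likelihood ratio implied by a belief moving from 87% to 87.01%
    lr = odds(0.8701) / odds(0.87)
    print(lr)              # ~1.00089
    print(10 * log10(lr))  # ~0.0038 decibels of evidence

That shift is a few thousandths of a decibel, far below any plausible resolution of introspection.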
Mathematically it makes sense to update on all evidence. Practically, there is a fuzzy threshold beyond which I need to just ignore very weak evidence, unless there’s so much of it that the sum total crosses the bounds of significance.
very small amounts of evidence for some propositions (e.g. the existence of the deity Thor)
Very small amounts of evidence? Entire mythologies are quite strong evidence of something Thor-like. The point is to be able to say “I don’t believe in Thor” and “That is strongish evidence for the existence of Thor” without conflict.
Your point about neglecting small shifts (likelihood ratio 1.0001) is well made, but your numbers are too charitable. When someone says “there is no evidence for X”, there is usually some substantial piece of evidence (LR > 10) known to them, even quite strong evidence: not a tiny shift, but not totally conclusive either. The problem is that even substantial evidence usually has the problem you are pointing out (the cost of consideration exceeds the value of information).
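To put rough numbers on both cases (a minimal sketch using the standard odds-form update; the 10% prior is made up for illustration):

    def update(prior, lr):
        # Posterior probability from a prior and a likelihood ratio (odds form)
        odds = prior / (1 - prior) * lr
        return odds / (1 + odds)

    print(update(0.10, 10))      # ~0.53: substantial evidence, far from conclusive
    print(update(0.87, 1.0001))  # ~0.870013: a shift far below introspective resolution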
Practically, there is a fuzzy threshold beyond which I need to just ignore very weak evidence, unless there’s so much of it that the sum total crosses the bounds of significance.
Consider the difficulties of programming something like that:
Ignore evidence. If the accumulated ignored evidence crosses some threshold, process the whole of it.
You see the problem. If the quoted sentence is your preferred modus operandi, you’ll have to restrict what you mean by “ignore”. You’ll still need to file the evidence somewhere, and to evaluate it somehow, so that when its cumulative weight exceeds your threshold you’ll still be able to update on it.
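Concretely, the best you can do is something like the following (a hypothetical sketch; the buffering scheme, the EvidenceBuffer name, and the one-decibel threshold are my own invention):

    from math import log10

    class EvidenceBuffer:
        """Defer weak evidence; fold it into the belief once it adds up."""

        def __init__(self, log_odds, threshold_db=1.0):
            self.log_odds = log_odds            # current belief, in log10-odds
            self.pending = 0.0                  # deferred evidence (summed log10 LRs)
            self.threshold = threshold_db / 10  # 1 decibel = 0.1 log10-odds units

        def observe(self, likelihood_ratio):
            # To "ignore" an item we must still evaluate it and file it away...
            self.pending += log10(likelihood_ratio)
            # ...so that the sum total can cross the bounds of significance.
            if abs(self.pending) >= self.threshold:
                self.log_odds += self.pending
                self.pending = 0.0

The “ignored” evidence gets all the processing a real update would; it is just applied late, in a lump.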
Realistically, humans seem to ignore it (and forget about it) unless they get a lot all at once. Yes, that’s a failure mode, but it’s not usually a major problem.
Or, I suppose, if they want to believe it, but that’s hardly the same thing.