I suppose, on further reflection, that the metaconfidence concept is simply a heuristic for computing our probabilities more accurately. What I’m actually saying is that, when presented with very suspicious, contradictory, or weak evidence, the probability that Bayes’ theorem computes is not the real value: it is an approximation of a value that is very probably ≤ the calculated value, and the probability distribution below the calculated value grows wider, exponentially and without bound, as the calculated probability shrinks. Put another way, I think there’s a rational, justifiable reason to exponentially discount evidence at the small end of the probability spectrum.
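To make the discounting idea concrete, here is one minimal sketch of what such an adjustment might look like. Everything here is an assumption for illustration: the function name `discounted_probability`, the penalty shape (proportional to −log of the computed probability), and the tuning constant `k` are all hypothetical choices, not a derivation of the rule being proposed.

```python
import math

def discounted_probability(p_bayes: float, k: float = 0.5) -> float:
    """Hypothetical discount on a Bayes-computed probability.

    Reflects the claim above: the true value is probably <= the
    calculated one, and the downward adjustment (and its uncertainty)
    grows as the calculated probability shrinks.
    """
    if not 0.0 < p_bayes < 1.0:
        raise ValueError("p_bayes must be strictly between 0 and 1")
    # Work in log-odds space, the natural scale for Bayesian updates.
    log_odds = math.log(p_bayes / (1.0 - p_bayes))
    # Penalty grows like -log(p_bayes): larger for smaller probabilities.
    penalty = k * -math.log(p_bayes)
    # Subtracting the penalty pulls the estimate below the computed value.
    adjusted = log_odds - penalty
    return 1.0 / (1.0 + math.exp(-adjusted))
```

With `k = 0` this returns the Bayesian value unchanged; as `k` grows, small computed probabilities are pushed further down, which is one way to operationalize "exponentially discount evidence at the small end."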
So what’s the membership test for “suspicious, contradictory, or weak evidence” and what update rule should be used for that kind of evidence if Bayes is biased when dealing with it?