First we’d have to attach a meaning to the claim, yes? I’ve seen evidence for various claims about Bayes’ Theorem, including but probably not limited to ‘Any workable extension of logic to deal with uncertainty will approximate Bayes,’ and ‘Bayes works better in practice than frequentist methods’. Decide which claim you want to talk about and you’ll know what evidence against it would look like.
(Halpern more or less argues against the first one, but I’m looking at his article and so far he just seems to be pointing out Jaynes’ most commonsensical requirements.)
I meant the claim posed here, about tests and priors. It is stated as
p(A|X) = p(X|A)p(A) / [p(X|A)p(A) + p(X|~A)p(~A)]
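A quick numerical sketch of that formula, with the classic test-and-prior setup. The numbers here are purely illustrative (a condition with 1% prevalence, a test with 95% sensitivity and a 5% false-positive rate), not from the original discussion:

```python
# Bayes' theorem: p(A|X) = p(X|A)p(A) / [p(X|A)p(A) + p(X|~A)p(~A)]
# Illustrative numbers only:
p_A = 0.01             # prior p(A): prevalence of the condition
p_X_given_A = 0.95     # p(X|A): probability of a positive test given A
p_X_given_notA = 0.05  # p(X|~A): false-positive rate

numerator = p_X_given_A * p_A
denominator = numerator + p_X_given_notA * (1 - p_A)
p_A_given_X = numerator / denominator  # posterior p(A|X)

print(round(p_A_given_X, 3))  # prints 0.161
```

Note how the posterior stays low despite a positive test, because the prior p(A) is small; this is the usual point of the tests-and-priors examples.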
But does it make sense for that to be wrong? It is a theorem, unlike the statement 2+2=4. Maybe one could show that the axioms and definitions used to prove Bayes’ Theorem are inconsistent, which would be a pretty clear kind of refutation. I’m not sure anymore that what I said has meaning. Well, thanks for the help.
Uh, 2+2=4 is most definitely a theorem. A very simple and obvious theorem, yes. But a theorem.
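For what it’s worth, in a proof assistant such as Lean the statement is a one-line theorem, discharged by definitional computation:

```lean
-- 2 + 2 = 4 holds by reflexivity: both sides reduce to the same numeral.
example : 2 + 2 = 4 := rfl
```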
For Gödel-Bayes issues, you can start with the responses to my post on the subject. (I’ve since learned and remembered more about Gödel.)
We should have the ability to talk about subjective uncertainty in, at the very least, particular proofs and probabilities. I don’t know that we can. But I like the following argument, which I recall seeing here somewhere:
If there exists a perfect probability calculation based on a set of background information, it must take this uncertainty into account. Therefore, applying this uncertainty again to the answer would mean double-counting the evidence, which is strictly verboten. We therefore cannot use this line of reasoning to produce a contradiction. Barring other arguments, we can assume the uncertainty equals a really small fraction.
E.g., suppose a guy comes out tomorrow with a proof of the Riemann Hypothesis. What are the chances he is wrong? Surely not zero.
But the chance that the Riemann Hypothesis itself is wrong, if it has a proof? Well, that kinda seems like zero. (But then, how would we know that? It does seem like we have to filter through our unreliable senses.)
Hrmm… I’m still taking high school geometry, so “infinite set of axioms” doesn’t really make sense yet. I’ll try to re-read that thread once I’ve started college-level math.