Good post, I think you’re looking in the right direction :-)
It can be included as an axiom in a system: for example, I can believe the axioms of PA with probability 1, believe that ZFC is consistent with probability 99%, that the Riemann Hypothesis (RH) is true with probability 95%, that RH is provable in ZFC with probability 90%, or that the results of my inference algorithm are “well-calibrated” in a precise sense with probability 50%.
More generally, you could think about formal systems where each axiom comes with a prior probability. I’m not sure whether such things have been studied, but it sounds promising. But the million-dollar question here is this: when you’re given a new mathematical statement that doesn’t seem to be immediately provable or disprovable from your axioms, how do you assign it a probability value? I remember thinking about this a lot without success.
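For what it’s worth, here is a minimal sketch of what “axioms with prior probabilities” might look like, using the numbers from the quote above. The representation and the independence assumption are mine, not an established formalism:

```python
from dataclasses import dataclass
from math import prod

@dataclass(frozen=True)
class Axiom:
    statement: str  # the axiom itself, treated as an opaque string
    prior: float    # prior probability that the axiom is true

# The toy "probabilistic formal system" from the quote above.
system = [
    Axiom("axioms of PA", 1.00),
    Axiom("Con(ZFC)", 0.99),
    Axiom("RH", 0.95),
    Axiom("ZFC proves RH", 0.90),
    Axiom("my inference algorithm is well-calibrated", 0.50),
]

def credence_in_conjunction(axioms: list[Axiom]) -> float:
    """Credence that every axiom holds, naively assuming independence."""
    return prod(a.prior for a in axioms)

# A crude lower bound on the credence in anything proved from all five axioms:
print(credence_in_conjunction(system))  # 1.0 * 0.99 * 0.95 * 0.90 * 0.50 ≈ 0.423
```

Of course the independence assumption is doing all the work here, and it breaks down exactly when a new statement is logically entangled with the axioms, which is the hard case.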
I can also derive probabilistic knowledge from exact knowledge: for example, I can believe that random strings probably have high Kolmogorov complexity; I can believe that if you randomly choose whether to negate a statement, the result is true with probability 50%.
True, but I’m not sure how such things can help you when you’re facing “true” logical uncertainty in a deterministic universe. Possibly relevant discussion on MathOverflow: Can randomness add computability?
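For concreteness, the quoted claim about random strings does have a computable stand-in: Kolmogorov complexity itself is uncomputable, but the output length of a general-purpose compressor is an upper bound on it (up to a constant), and a random string should compress poorly. A quick Python sketch:

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """zlib output length: a crude, computable upper bound on Kolmogorov complexity."""
    return len(zlib.compress(data, level=9))

random_bytes = os.urandom(10_000)  # incompressible with overwhelming probability
structured = b"ab" * 5_000         # highly regular string of the same length

print(compressed_size(random_bytes))  # ~10,000 or slightly more (format overhead)
print(compressed_size(structured))    # a few dozen bytes
```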
when you’re given a new mathematical statement that doesn’t seem to be immediately provable or disprovable from your axioms, how do you assign it a probability value?
50%? Sometimes you can infer its likely truth value from the conditions under which you’ve come upon it, but given that, among statements of any given form, there are about as many false ones as true ones, this default seems right. Then there could be lots of syntactic heuristics that allow you to adjust this initial estimate.
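To illustrate what such heuristics might look like (a pure toy: the rules and their magnitudes below are invented for the example, not proposed as correct):

```python
def estimate_probability(statement: str) -> float:
    """Start at the 50% default, then nudge it with crude syntactic heuristics."""
    p = 0.5
    s = statement.lower()
    if "for all" in s:       # toy heuristic: universal claims have many chances to fail
        p -= 0.1
    if "there exists" in s:  # toy heuristic: existential claims need only one witness
        p += 0.1
    for _ in range(s.count("not")):  # toy heuristic: each negation flips the estimate
        p = 1.0 - p
    return round(min(max(p, 0.01), 0.99), 2)  # clamp away from certainty, tidy the float

print(estimate_probability("for all n, n + 0 = n"))           # 0.4
print(estimate_probability("there exists a prime above 10"))  # 0.6
```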
“Syntactic heuristics” is a nice turn of phrase, but you could have just as well said “you can always say 50%, or maybe use some sort of clever algorithm”. Not very helpful.
I don’t expect there is any clever simple trick to it. (But I also don’t think that assigning probabilities to logical statements is a terribly useful activity, or one casting foundational light on decision theory.)
But I also don’t think that assigning probabilities to logical statements is a terribly useful activity, or one casting foundational light on decision theory.
Can you explain this a bit more? Do you have any reasons for this suspicion?
I don’t have any reasons for the suspicion that assigning probabilities to logical statements casts foundational light on decision theory. I don’t see how having such an assignment helps any.