Logical uncertainty and mathematical uncertainty

There is a significant difference between uncertainty about mathematical truths in cases where there is no known procedure for checking whether a mathematical claim is true or false, and cases where such a procedure is known but you do not have the computational resources to carry it out. Examples of the former include the Collatz and twin prime conjectures; examples of the latter include whether a given large number is a semiprime, and what the first decimal digit of Graham's number is.
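To make the "known procedure, unaffordable resources" case concrete, here is a minimal sketch (my own illustration, not from the post) of the in-principle procedure for the semiprime example: trial division settles the question for any number, but is hopeless at the sizes that matter in practice.

```python
# Illustration only: the known-but-unaffordable procedure for the semiprime example.
# Trial division answers the question for any N, but takes roughly sqrt(N) steps,
# which is astronomically infeasible for cryptographic-size numbers.
def smallest_prime_factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return None  # no factor found: n is prime

def is_semiprime(n):
    p = smallest_prime_factor(n)
    if p is None:
        return False  # n is prime, so not a product of two primes
    return smallest_prime_factor(n // p) is None  # semiprime iff the cofactor is prime

print(is_semiprime(15))  # True: 3 * 5
print(is_semiprime(12))  # False: 2 * 2 * 3
```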

The former should not be called logical uncertainty, because it is about what is true, not about what can be proved; I’ll call it mathematical uncertainty instead. The latter really is uncertainty about logic, since we would know that the claim is either proved or refuted by whatever theory we used to prove the algorithm correct, and we would just be uncertain as to which one.

It's well known that standard probability theory is a poor fit for handling logical uncertainty, because it assumes that the probabilities are logically coherent, and uncertainty about what the logical coherence constraints are is exactly what we want to model; there are no possible outcomes in which the truth value of a decidable sentence is anything other than what it actually is. But this doesn't apply to mathematical uncertainty: we could study the probability distribution over complete theories that our beliefs converge to as time goes to infinity, and reason about this distribution using ordinary probability theory. Possible sources of evidence about math that could be treated with ordinary probability theory include physical experiments and black-boxed human intuitions. But another important source of evidence about mathematical truth is checking examples, and this cannot be reasoned about in ordinary probability theory, because each example is assigned probability 1 or 0 in the limit distribution, since we can check it. So just because you can reason about mathematical uncertainty using ordinary probability theory doesn't mean you should.
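To spell out why checked examples carry no evidence under such a distribution: if the limit distribution assigns probability 1 to a checkable example E, then conditioning on E cannot move the probability of the conjecture C. A one-line derivation (notation mine, not from the post):

```latex
% If P(E) = 1 for a checkable example E, conditioning on E leaves P(C) unchanged:
P(C \mid E) = \frac{P(C \wedge E)}{P(E)} = P(C \wedge E) = P(C) - P(C \wedge \neg E) = P(C),
\quad \text{since } P(E) = 1 \text{ forces } P(C \wedge \neg E) \le P(\neg E) = 0.
```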

Logical induction, taken at face value, looks like an attempt at handling mathematical uncertainty, since logical inductors assign probabilities to every sentence, not just sentences known to be decided by the deductive process. But most of the desirable properties of logical inductors that have been proved refer to sequences of decidable sentences, so logical induction seems potentially valuable only for handling logical uncertainty. In fact, the logical induction criterion doesn't even imply anything about what the probabilities of an undecidable sentence converge to, except that the limit is neither 0 nor 1.

Another reason not to trust logical induction too much on mathematical uncertainty is that logical induction gets all its evidence from the proofs of a single formal system, and there isn't any one formal system whose proofs account for all sources of evidence about mathematical claims. One could hope to fix this by building an algorithm that draws on every such source, but it seems to me that characterizing how to correctly handle all sources of evidence about mathematical truths, precisely enough to be turned into an algorithm, would be a shockingly huge advance in the philosophy of mathematics, and I don't expect it to happen any time soon.

Fortunately, we might not need to come up with a better way of handling mathematical uncertainty. It seems plausible to me that only logical uncertainty, not mathematical uncertainty more broadly, has any practical use. For instance, in cryptography, an eavesdropper who is uncertain about what a given ciphertext decrypts to has a perfectly good exponential-time algorithm to determine the answer, but lacks the computational resources to run it. Someone wondering whether their cryptosystem is breakable might phrase the question in terms of the existence of an efficient algorithm that accomplishes some task with probability that does not go to 0 as the task size approaches infinity, and it might not be clear that this question can be answered by commonly used powerful axiom systems; but for practical purposes it would suffice to know whether there is an efficient algorithm of at most some given size that accomplishes the computational task with nontrivial probability for tasks of the sizes actually used (or that will be used in the near future), and for this latter question there is a (very computationally expensive) algorithm that determines the answer. More broadly, physics appears to be computable, so we should expect that in any situation in which we are uncertain about what will happen in the real world due to lack of mathematical knowledge rather than lack of physical knowledge, there is a computation that would resolve our lack of mathematical knowledge if we were capable of running it.
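As a toy illustration of the eavesdropper's situation (my own example; the cipher and key size are hypothetical stand-ins), the brute-force search below decides what a ciphertext decrypts to, and is exponential in the key length; at real key sizes the loop simply cannot be run.

```python
# Toy illustration of the eavesdropper's "perfectly good exponential-time algorithm".
# The XOR cipher and 2-byte key are stand-ins; real key sizes make this loop infeasible.
from itertools import product

def xor_encrypt(key, plaintext):
    # Repeating-key XOR; encryption and decryption are the same operation.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def brute_force(ciphertext, known_prefix, key_len=2):
    # Exhaustive search over all 256**key_len keys: exponential in key_len.
    for key in product(range(256), repeat=key_len):
        candidate = xor_encrypt(bytes(key), ciphertext)
        if candidate.startswith(known_prefix):
            return bytes(key), candidate
    return None

secret_key = b"\x2a\x17"
ciphertext = xor_encrypt(secret_key, b"ATTACK AT DAWN")
print(brute_force(ciphertext, b"ATTACK"))
```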

Thus it might make sense to give up on assigning probabilities to all sentences, and just try to assign probabilities to sentences that are known to be resolvable by some proof system (for instance, claims about the outputs of programs that are known to halt). Logical induction can easily be modified to do this, by only opening the market for a sentence once the deductive process outputs a proof that the sentence is resolvable (see the sketch below). But this modification alone doesn't offer any improvement over logical induction. The hope is that by not even trying to assign probabilities to sentences that are not known to be resolvable, it might be possible to assign probabilities to resolvable sentences in a way that avoids certain computational limitations that are otherwise inevitable. The properties of logical inductors are all asymptotic, with no computable bounds on the rate of convergence, and it is impossible to do better than this in general. Asymptotic properties with no computable bounds on the rate of convergence are useless for practical purposes, and I don't see any reason we can't do better when restricting attention to sentences known in advance to be resolvable.
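Here is a schematic sketch of the market-opening gate just mentioned (my own pseudocode, not the construction from the logical induction paper; `deductive_process` and `assign_price` are hypothetical stand-ins for the real components).

```python
# Schematic sketch: only assign probabilities to sentences once the deductive
# process has output a proof that they are resolvable.
def run_gated_market(deductive_process, assign_price, n_days):
    open_sentences = set()  # markets that have been opened
    prices = {}             # current probability assigned to each open sentence
    for day in range(n_days):
        # deductive_process(day) is assumed to return the set of sentences
        # proved resolvable by that day; only those markets get opened.
        open_sentences |= deductive_process(day)
        for s in open_sentences:
            prices[s] = assign_price(s, day)
    return prices

# Toy usage with stand-in components: one more sentence is proved resolvable
# each day, and the pricing rule is a constant placeholder.
sentences = ["phi_1", "phi_2", "phi_3"]
print(run_gated_market(lambda day: set(sentences[:day]),
                       lambda s, day: 0.5,
                       n_days=3))
```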

An edge case between logical and mathematical uncertainty is uncertainty about the truth of Σ₁ sentences (or equivalently, of Π₁ sentences, their negations). True Σ₁ sentences are provable in Peano Arithmetic, so uncertainty about their truth can be expressed as uncertainty about the existence of a proof, so this could be described as a case of logical uncertainty. However, since false Σ₁ sentences aren't necessarily disprovable, handling uncertainty about Σ₁ sentences runs into similar difficulties as handling uncertainty about more general arithmetical sentences. And since the physical world doesn't appear to be able to do hypercomputation, the same arguments that more general mathematical uncertainty is useless seem to apply to general Σ₁ sentences. This is why I've been characterizing logical uncertainty as uncertainty about sentences known to be resolvable, even though it could be argued that the term should also include uncertainty about Σ₁ sentences, which are resolvable if true.
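To make "resolvable if true" concrete, here is a minimal sketch (illustration only; `phi` is a stand-in for any decidable property) of the obvious semi-decision procedure for a Σ₁ sentence ∃n φ(n): it halts with a witness, and hence a proof, exactly when the sentence is true, and runs forever when it is false.

```python
def find_witness(phi):
    # Semi-decision procedure for the Sigma_1 sentence "there exists n with phi(n)":
    # halts and returns a witness if the sentence is true, loops forever if it is false.
    n = 0
    while True:
        if phi(n):
            return n
        n += 1

# Example: "some number greater than 100 is a perfect square" (true, so this halts at 121).
print(find_witness(lambda n: n > 100 and int(n ** 0.5) ** 2 == n))
```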