Logical uncertainty, kind of. A proposal, at least.

If you want context and are fast at seeing the implications of math, see Benja’s post. This post is much lighter on the math, though it may take more background reading and more laborious interpolation, since it’s, well, lighter on the math.

Imagine I introduced my pet robot to a game. The robot has 10 seconds to pick a digit, and if the trillionth prime number ends with that digit, the robot gets a cookie (it likes peanut butter cookies the best). 10 seconds is not enough time for my robot to calculate the answer deductively. And yet, guessing an answer is superior to running out of time quietly. What sort of general logic should my robot follow in under 10 seconds to figure out that it should be indifferent between answering 1, 3, 7 or 9? Does it even make sense to be indifferent between the real answer and an impossible answer, even if you don’t know which is which?
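(In case the indifference between 1, 3, 7 and 9 seems mysterious: a number ending in an even digit is divisible by 2, and a number ending in 0 or 5 is divisible by 5, so every prime bigger than 5 ends in 1, 3, 7 or 9. A few lines of Python, purely as an illustrative check and not anything the robot gets to run, make the point:)

```python
# Sanity check: which last digits do primes greater than 5 actually have?
def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            for j in range(i * i, n + 1, i):
                sieve[j] = False
    return [i for i, is_prime in enumerate(sieve) if is_prime]

last_digits = {p % 10 for p in primes_up_to(100_000) if p > 5}
print(sorted(last_digits))  # [1, 3, 7, 9]
```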

As you might expect from context, the proposed solution will involve assigning every true-or-false math statement a probability-esque degree of plausibility, with numbers other than 0 or 1 indicating logical uncertainty. Why is this a good idea?

To explain logical uncertainty, let’s first take a step back and reframe logical certainty in terms of rules for reasoning that apply to both deductive logic and probabilistic logic. An important resource here is E.T. Jaynes’ Probability Theory (pdf) - the most relevant part being page 31 of the book. The key idea is that each of the probability axioms applies just fine no matter what kind of Boolean statement you want to find the probability of. Which is to say, probability already applies to arithmetic—deductive reasoning about arithmetic is just probability theory in the limit where probabilities go to 1 or 0. Our robot starts with a collection of definitions labeled with probability 1 (like “0 is a number” or “S(0)+0=S(0)” [if this S(0) stuff needs context, see wikipedia]), and then applies deductive rules according to the universal rules of probability. We translate “A implies B” into the language of probabilities as P(AB|C) = P(A|C), and then apply the always-true product rule P(B|AC)=P(AB|C)/P(A|C). If P(A|C)=1, that is, A|C is deductively true, and A implies B, then P(B|AC)=P(B|C)=1. The machinery that underlies deduction is in fact the same machinery that underlies probabilistic reasoning. And we’re just going to exploit that a little.
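To see that reduction in action, here is a tiny numerical illustration (my own toy example, not anything from Jaynes): encode “A implies B” as a joint distribution that puts zero probability on A-and-not-B, then let the product rule recover modus ponens.

```python
# Toy worlds: each pair of truth values for (A, B), with some probability.
# "A implies B" is encoded by giving the world (A=True, B=False) probability 0;
# the other numbers are arbitrary, only that zero matters for the argument.
worlds = {
    (True, True): 0.3,
    (True, False): 0.0,   # ruled out by "A implies B"
    (False, True): 0.2,
    (False, False): 0.5,
}

def prob(event):
    """P(event), where event is a predicate on (A, B) tuples."""
    return sum(p for w, p in worlds.items() if event(w))

def conditional(event, given):
    """Product rule, rearranged: P(event | given) = P(event AND given) / P(given)."""
    return prob(lambda w: event(w) and given(w)) / prob(given)

A = lambda w: w[0]
B = lambda w: w[1]

print(prob(lambda w: A(w) and B(w)) == prob(A))  # True: P(AB) = P(A), the translation of "A implies B"
print(conditional(B, A))                         # 1.0: given A, B follows, i.e. modus ponens
```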

An alternate axiomatization due to Savage (hat tip to articles by Sniffoy and fool) is based just on actions—it doesn’t seem necessary for every agent to store numerical plausibilities, but every agent has to act, and if our agent is to act as if it had consistent preferences when presented with bets, it must act as if it calculated probabilities. Just like the conditions of Cox’s theorem as used by E.T. Jaynes, the conditions of Savage’s theorem apply to bets on arithmetic just fine. So our robot always behaves as if it assigns some probabilities over the last digit of the trillionth prime number—it’s just that when our robot’s allowed to run long enough, all but one of those probabilities is 0.

So how do we take the basic laws of belief-manipulation, like the product rule or the sum rule, and apply them to cases where we run out of time and can’t deduce all the things? If we still want to take actions, we still want to assign probabilities, but we can’t use deduction more than a set number of times...

Okay fine I’ll just say it. The proposal outlined here is to treat a computationally limited agent’s “correct beliefs” as the correct beliefs of a computationally unlimited agent with a limited definition of what deduction can do. So this weakened-deduction agent has a limitation, in that starting from axioms it can only prove some small pool of theorems, but it’s unlimited in that it can take the pool of proven theorems, and then assign probabilities to all the unproven true or false statements. After we flesh out this agent, we can find a computationally limited algorithm that finds correct (i.e. equal to the ones from a sentence ago) probabilities for specific statements, rather than all of them. And finally, we have to take this and make a decision procedure—our robot. After all, it’s no good for our robot to assign probabilities if it proceeds to get stuck because it tries to compare the utilities of the world if the end of the trillionth prime number were 1 versus 7 and doesn’t even know what it means to calculate the utility of the impossible. We have to make a bit of a modification to the whole decision procedure; we can’t just throw in probabilities and expect utility to keep up.

So, formally, what’s going on when we limit deduction? Well, remember the process of deduction outlined earlier?

We translate “A implies B” into the language of probabilities as P(AB|C) = P(A|C), and then apply the always-true product rule P(B|AC)=P(AB|C)/P(A|C). If P(A|C)=1, that is, A|C is deductively true, and A implies B, then P(B|AC)=P(B|C)=1.

There is a chain here, and if we want to limit deduction to some small pool of provable theorems, we need one of the links to be broken outside that pool. As implied, I don’t want to mess with the product rule, or else we violate a desideratum of belief. Instead, we’ll mess with implication itself—we translate “A implies B” into “P(AB|C)=P(A|C) only if we’ve spent less than 10 seconds doing deduction.” Or “P(AB|C)=P(A|C) only if it’s been less than 10^6 steps from the basic axioms.” These limitations are ugly and nonlocal because they represent the intrusion of our agent’s resource limits into a system of rules that previously ran forever.

Note that the weakening of implication does not necessarily determine the shape of our pool of deduced theorems. A weakened-deduction agent could spiral outward from shortest to longest theorems, or it could search more cleverly to advance on some specific theorems before time runs out.
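To make “weakened deduction” a little more tangible, here is a toy forward chainer with a hard step budget. None of the particular facts, rules, or the search order below come from the proposal; it is just a minimal sketch of the idea that the pool of proven theorems is whatever happened to be reachable before the budget ran out.

```python
# A toy prover with a step budget. Facts are strings, rules are
# (premises, conclusion) pairs, and "deduction" is just applying rules whose
# premises are already proven, until the budget runs out. All the facts and
# rules here are made up for illustration.
def bounded_deduce(axioms, rules, max_steps):
    proven = set(axioms)
    steps = 0
    changed = True
    while changed and steps < max_steps:
        changed = False
        for premises, conclusion in rules:
            if steps >= max_steps:
                break
            steps += 1  # every rule application attempt costs a step
            if conclusion not in proven and premises <= proven:
                proven.add(conclusion)
                changed = True
    return proven

axioms = {"0 is a number"}
rules = [
    ({"0 is a number"}, "S(0) is a number"),
    ({"S(0) is a number"}, "S(S(0)) is a number"),
    ({"S(S(0)) is a number"}, "S(S(S(0))) is a number"),
]

print(bounded_deduce(axioms, rules, max_steps=2))    # never reaches S(S(S(0)))
print(bounded_deduce(axioms, rules, max_steps=100))  # proves all four statements
```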

If a weakened-deduction agent just had the product rule and this new way of translating the axioms into probabilities, it would accumulate some pool of known probabilities—it could work out from the probability-1 axioms to show that some short statements had probability 1 and some other short statements had probability 0. It could also prove some more abstract things like P(AB)=0 without proving anything else about A or B, as long as it followed the right search pattern. But it can’t assign probabilities outside of deduction—it doesn’t have the rules. So it just ends up with a pool of deduced stuff in the middle of a blank plain of “undefined.”

Okay, back to referring to E.T. Jaynes (specifically, the bottom of page 32). When deriving the laws of probability from Cox’s desiderata, the axioms fall into different groups—there are the “laws of thought” parts, and the “interface” parts. The laws of thought are things like Bayes’ theorem, or the product rule. They tell you how probabilities have to fit with other probabilities. But they don’t give you probabilities ex nihilo, you have to start with probability-1 axioms or known probabilities and build out from them. The parts that tell you how to get new probabilities are the interface parts, ideas like “if you have equivalent information about two things, they should have the same probability.”

So what does our limited-deduction agent do once it reaches its limits of deduction? Well, to put it simply, it uses deduction as much as it can, and then it uses the principle of maximum entropy for the probability of everything else. Maximum entropy corresponds to minimum information, so it satisfies a desideratum like “don’t make stuff up.”

The agent is assigning probabilities to true or false logical statements, statements like S(0)+S(0)=S(S(0)). If it had an unrestricted translation of “A implies B,” it could prove this statement quickly. But suppose it can’t. Then this statement is really just a string of symbols. The agent no longer “understands” the symbols, which is to say it can only use facts about the probability of these symbols that were previously proved and are within the pool of theorems—it’s only a part of an algorithm, and doesn’t have the resources to prove everything, so we have to design the agent to assign probabilities based just on what it proved deductively.

So the design of our unlimited-computation, limited-deduction agent is that it does all the deduction it can according to some search algorithm and within some limit, and this can be specified to take any amount of time. Then, to fill up the infinity of un-deduced probabilities, the agent just assigns the maximum-entropy probability distribution consistent with what’s proven. For clever search strategies that figure out things like P(AB)=0 without figuring out P(A), doing this assignment requires interpretation of AND, OR, and NOT operations—that is, we still need a Boolean algebra for statements. But our robot no longer proves new statements about probabilities of these symbol strings, in the sense that P(S(0)+0=S(0))=P(S(0)+S(0)=S(S(0))) is a new statement. An example of a non-new statement would be P(S(0)+0=S(0) AND S(0)+S(0)=S(S(0))) = P(S(0)+0=S(0)) * P(S(0)+S(0)=S(S(0)) | S(0)+0=S(0)) - that’s just the product rule, it hasn’t actually changed any of the equations.
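As a very brute-force sketch of that assignment rule (my own illustration, and emphatically not an efficient algorithm): treat each statement as a Boolean variable, enumerate every truth assignment, discard the assignments that contradict something in the pool of proven theorems, and spread probability uniformly over whatever survives.

```python
from itertools import product

def max_entropy_prob(statement, variables, proven):
    """P(statement) under the maximum-entropy distribution consistent with the
    pool of proven theorems. A 'world' assigns True/False to every variable;
    statement and each proven theorem are predicates on worlds."""
    worlds = [dict(zip(variables, values))
              for values in product([True, False], repeat=len(variables))]
    allowed = [w for w in worlds if all(theorem(w) for theorem in proven)]
    return sum(statement(w) for w in allowed) / len(allowed)

# With nothing proven about A and B, everything built from them gets the
# uniform treatment:
print(max_entropy_prob(lambda w: w["A"], ["A", "B"], []))               # 0.5
print(max_entropy_prob(lambda w: w["A"] and w["B"], ["A", "B"], []))    # 0.25
# Prove P(AB)=0, i.e. that A and B are mutually exclusive, and things shift:
not_both = lambda w: not (w["A"] and w["B"])
print(max_entropy_prob(lambda w: w["A"], ["A", "B"], [not_both]))       # 0.333...
```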

End of part 1 exercise: Can deducing an additional theorem lead to our agent assigning less probability to the right answer under certain situations? (Reading part 2 may help)

Okay, now on to doing this with actual bounded resources. And back to the trillionth prime number! You almost forgot about that, didn’t you. The plan is to break up the strict deduction → max entropy procedure, and do it in such a way that our robot can get better results (higher probability to the correct answer) the longer it runs, up to proving the actual correct answer. It starts with no theorems, and figures out the max entropy probability distribution for the end of the trillionth prime number. Said distribution happens to be one-half to everything, e.g. p(1)=1/2 and p(2)=1/2 and p(3)=1/2. The robot doesn’t know yet that the different answers are mutually exclusive and exhaustive, much less what’s wrong with the answer of 2. But the important thing is, assigning the same number to everything of interest is fast. Later, as it proves relevant theorems, the robot updates the probability distribution, and when it runs out of resources it stops.

Side note: there’s also another way of imagining how the robot stores probabilities, used in Benja’s post, which is to construct a really big mutually exclusive and exhaustive basis (called “disjunctive normal form”). Instead of storing P(A) and P(B), which are not necessarily mutually exclusive or exhaustive, we store P(AB), P(A¬B) (the hook thingy means “NOT”), P(¬AB), and P(¬A¬B), which are mutually exclusive and exhaustive. These things would then each have probability 1/4, or 1/2^N, where N is the number of statements you’re assigning probabilities to. This is a pain when N goes to infinity, but can be useful when N is approximately the number of possible last digits of a number.

Back on track: suppose the first thing the robot proves about the last digit of the trillionth prime number is that answers of 1, 2, 3, 4, 5, 6, 7, 8, 9, and 0 are exhaustive. What does that do to the probabilities? In disjunctive normal form, the change is clear—exhaustiveness means that P(¬1¬2¬3¬4¬5¬6¬7¬8¬9¬0)=0, there’s no leftover space. Previously there were 2^10=1024 of these disjunctive possibilities, now there are 1023, and the remaining ones stay equivalent in terms of what’s been proven about them (nothing), so the probability of each went from 1/1024 to 1/1023. Two things to note: first, figuring this out took a small amount of work and is totally doable for the robot; second, we don’t want to do this work every time we use modus tollens, so we need some way to tell whether our new theorem matters to the trillionth prime number.

For example, imagine we were interested in the statement A. The example is to learn that A, B, and C are mutually exclusive and exhaustive, step by step. First, we could prove that A, B, C are exhaustive—P(¬A¬B¬C)=0. Does this change P(A)? Yes, it changes from 4/8 (N is 3, so 2^3=8) to 4/7. Then we learn that P(AB)=0, i.e. A and B are mutually exclusive. This leaves us only A¬BC, ¬ABC, A¬B¬C, ¬AB¬C, and ¬A¬BC. P(A) is now 2/5. Now we learn that A and C are mutually exclusive, so the possibilities are ¬ABC, A¬B¬C, ¬AB¬C, and ¬A¬BC. P(A)=1/4. Each of the steps until now has had the statement A right there inside the parentheses—but for the last step, we show that B and C are mutually exclusive, P(BC)=0, and now we just have P(A)=P(B)=P(C)=1/3. We just took a step that didn’t mention A, but it changed the probability of A. This is because we’d previously disrupted the balance between ABC and ¬ABC. To tell when to update P(A) we not only need to listen for A to be mentioned, we have to track what A has been entangled with, and what’s been entangled with that, and so on in a web of deduced relationships.
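If you want to check those numbers, the same brute-force enumeration as in the earlier sketch reproduces them; the script below is just that check, specialized to three statements.

```python
from itertools import product

def p_A(proven):
    """Numerator and denominator of P(A), with probability spread uniformly
    over the truth assignments to A, B, C consistent with what's proven."""
    worlds = [dict(zip("ABC", values)) for values in product([True, False], repeat=3)]
    allowed = [w for w in worlds if all(theorem(w) for theorem in proven)]
    return sum(w["A"] for w in allowed), len(allowed)

proven = []
print(p_A(proven))                                      # (4, 8): P(A) = 4/8
proven.append(lambda w: w["A"] or w["B"] or w["C"])     # A, B, C exhaustive
print(p_A(proven))                                      # (4, 7)
proven.append(lambda w: not (w["A"] and w["B"]))        # P(AB) = 0
print(p_A(proven))                                      # (2, 5)
proven.append(lambda w: not (w["A"] and w["C"]))        # P(AC) = 0
print(p_A(proven))                                      # (1, 4)
proven.append(lambda w: not (w["B"] and w["C"]))        # P(BC) = 0, never mentions A...
print(p_A(proven))                                      # (1, 3): ...but P(A) still moves
```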

The good news is that that’s it. The plausibility assigned to any statement A by this finite-computation method is the same plausibility that our computationally-unlimited deductively-limited agent would have assigned to it, given the same pool of deduced theorems. The difference is just that the limited-deduction agent did this for every possible statement, which as mentioned doesn’t make as much sense in disjunctive normal form.

So IF we accept that having limited resources is like having a limited ability to do implication, THEN we know how our robot should assign probabilities to a few statements of interest. It should start with the good old “everything gets probability 1/2,” which should allow it to win some cookies even if it only has a few milliseconds, and then it should start proving theorems, updating its probabilities when it proves something that should impact those probabilities.

Now onto the last part. The robot’s utility function wasn’t really designed for U(last digit of trillionth prime number is 1), so what should it do? Well, what does our robot like? It likes having a cookie over not having a cookie. C is for cookie, and that’s good enough for it. So we want to transform a utility over cookies into an expected utility that will let us order possible actions.

We have to make the exact same transformation in the case of ordinary probabilities, so let’s examine that. If I flip a coin and get a cookie if I call it correctly, I don’t have a terminal U(heads) or U(tails), I just have U(cookie). My expected utility of different guesses comes from not knowing which guess leads to the cookie.

Similarly, the expected utility of different guesses when betting on the trillionth prime number comes from not knowing which guess leads to the cookie. It is possible to care about the properties of math, or to care about whether coins land heads or tails, but that just means we have to drag in causality—your guess doesn’t affect how math works, or flip coins over.

So the standard procedure for our robot looks like this:

Start with some utility function U over the world, specifically cookies.

Now, face a problem. This problem will have some outcomes (possible numbers of cookies), some options (that is, strategies to follow, like choosing one of 10 possible digits), and any amount of information about how options correspond to outcomes (like “iff the trillionth prime ends with this digit, you get the cookie”).

Now our robot calculates the limited-resources probability of getting different outcomes given different strategies, and from that calculates an expected utility for each strategy.

Our robot then follows one of the strategies with maximum expected utility.
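Here is the whole procedure caricatured in code. The utility numbers and the stand-in probability function are placeholders I made up; the only structure taken from the procedure above is limited-resources probabilities feeding into ordinary expected utility.

```python
# A caricature of the whole loop. U and prob_of_cookie are placeholders:
# U is the utility function over outcomes (cookies), and prob_of_cookie stands
# in for the limited-resources probability calculation described above,
# assuming the robot has only managed to prove that the answer is one of
# 1, 3, 7, 9.
U = {"cookie": 1.0, "no cookie": 0.0}

strategies = [str(d) for d in range(10)]  # the ten digits the robot could guess

def prob_of_cookie(guess):
    return 0.25 if guess in {"1", "3", "7", "9"} else 0.0

def expected_utility(strategy):
    p = prob_of_cookie(strategy)
    return p * U["cookie"] + (1 - p) * U["no cookie"]

best = max(strategies, key=expected_utility)
print(best, expected_utility(best))  # prints "1 0.25"; 1, 3, 7 and 9 all tie
```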

Bonus exercises: Does this procedure already handle probabilistic maps from the options to the outcomes, like in the case of the flipped coin? How about if flipping a coin isn’t already converted into a probability, but is left as an underdetermined problem a la “a coin (heads XOR tails) is flipped, choose one.”