# Asymptotic Logical Uncertainty: The Benford Test

This is the second post in a series on Asymptotic Logical Uncertainty.

All Turing machines we discuss will have no input, and will output an infinite (or finite) sequence of bits. We let $M(n)$ denote the $n$th bit output by the Turing machine $M$.

Let $A(n)$ be the $n$th Ackermann number as defined in Conway and Guy’s “The Book of Numbers.” Let $\phi_n$ be a simple enumeration of sentences in first order logic over ZFC. Let $L$ be a deterministic Turing machine such that $L(n) = 1$ if and only if $\phi_n$ is provable with proof length at most $A(n)$. We will be discussing Turing machines which are designed to try to guess the output of $L$ in a very limited time. It will be clear later why we need $L$ to be computable, and therefore must define $L$ using bounded proofs.
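For concreteness, here is a sketch of the Conway–Guy Ackermann numbers in Python (the helper names `arrow` and `ackermann_number` are mine; only the first two values are small enough to actually compute):

```python
def arrow(a: int, k: int, b: int) -> int:
    """Knuth's up-arrow: a followed by k arrows and then b, defined recursively."""
    if k == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, k - 1, arrow(a, k, b - 1))

def ackermann_number(n: int) -> int:
    """The nth Ackermann number in Conway and Guy's sense: n with n arrows n."""
    return arrow(n, n, n)

# Only the first two values are feasible:
# A(1) = 1^1 = 1, and A(2) = 2^^2 = 2^2 = 4.
# A(3) = 3^^^3 is already far too large to write down.
```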

Given a probabilistic Turing machine $M$ and a time complexity function $T$, we define an infinite collection of dependent random variables $M_T(n)$ for $n \in \mathbb{N}$. $M_T(n)$ is defined to be $M(n)$ if $M$ outputs its first $n$ bits in time at most $T(n)$. If $M$ does not output $n$ bits in time $T(n)$, then $M_T(n)$ is either 0 or 1 uniformly at random. These random variables are dependent, because we only run $M$ once to determine $M_T(n)$ for all $n$.
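A toy simulation of these random variables, assuming we model a machine as a Python generator of bits and count one yielded bit as one time step (a crude stand-in for Turing machine time; also, for simplicity this re-runs the machine for each $n$, whereas the variables above share a single run):

```python
import random

def sample_M_T(make_machine, n, T):
    """Sample M_T(n): run the machine from scratch; if it yields n bits
    within T(n) steps, return its nth bit, otherwise return a fair coin flip.
    Here a "machine" is a generator of bits, and one yielded bit counts as
    one time step."""
    bits = []
    gen = make_machine()
    for _ in range(T(n)):
        try:
            bits.append(next(gen))
        except StopIteration:
            break
        if len(bits) >= n:
            return bits[n - 1]
    return random.randint(0, 1)  # timed out: uniformly random bit

def all_ones():
    """A fast toy machine that outputs 1 forever."""
    while True:
        yield 1
```

With a generous budget, `sample_M_T(all_ones, 5, lambda n: 10)` deterministically returns the machine's fifth bit; with a budget below $n$, it falls back to the random coin.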

Let $M$ be a probabilistic Turing machine designed to try to guess the output of $L$ in time at most $T$. Let $S$ be the set of natural numbers $n$ for which $\phi_n$ is the sentence “The first digit of $A(n)$ in base 10 is 1.” Let $B(n)$ be defined to be 1 if $n$ is a power of 10, and $\log_{10} 2$ otherwise. We say that $M$ satisfies the Benford test if

$$\lim_{\substack{n \to \infty \\ n \in S}} \left| P(M_T(n) = 1) - B(n) \right| = 0.$$

The Benford test is a desirable property, because we believe that $\phi_n$ (for $n \in S$) is always true when $n$ is a power of 10, and otherwise is pseudorandomly true with probability $\log_{10} 2$. Formally, we are making the assumption that for any probabilistic Turing machine $M$, the probability that $M_T(n) = L(n)$ for all $n \in S$ with $n \leq m$ is in $O\!\left(\prod_{n \in S,\, n \leq m} \max(B(n), 1 - B(n))\right)$.

The number $\log_{10} 2 \approx 0.30103$ comes from Benford’s Law. Intuitively, we expect that when $n$ is not a power of 10, $\phi_n$ is true $\log_{10} 2$ of the time, and $A(n)$ is too large for there to be any procedure which predicts $L(n)$ better than randomly.
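The same leading-digit phenomenon can be checked empirically on a more tractable sequence, the powers of 2, whose first digit is 1 with limiting frequency $\log_{10} 2$:

```python
import math

def leading_digit(m: int) -> int:
    """First base-10 digit of a positive integer."""
    return int(str(m)[0])

# Count how often 2^n starts with a 1, for n = 1 .. 10000.
N = 10_000
hits = sum(1 for n in range(1, N + 1) if leading_digit(2 ** n) == 1)
freq = hits / N
# The empirical frequency is close to log10(2) ~= 0.30103.
assert abs(freq - math.log10(2)) < 0.01
```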

The Benford test is extremely easy to pass if you are allowed to hard code $B$ into your algorithm. What is more difficult is finding an algorithm which passes the test in a natural way.

The weak Benford test will refer to the version of the test where the algorithm is allowed to know that it will only be judged on its answers for $n \in S$. This means that instead of trying to predict $L$, the algorithm tries to predict the output of $L'$, which is defined to agree with $L$ on the $n$th bit when $n \in S$ and to output a 0 for all other bits. The distinction between the Benford test and the weak Benford test is informal; the only difference is the bit string that the algorithm is “trying to predict.”
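A minimal sketch of the modified target $L'$ as a wrapper (the `toy_L` and `S` below are hypothetical stand-ins, not the real bounded-proof machine):

```python
def make_L_prime(L, S):
    """Return L': agrees with L on indices in S, outputs 0 everywhere else."""
    def L_prime(n: int) -> int:
        return L(n) if n in S else 0
    return L_prime

# Toy stand-ins for illustration only:
S = {2, 4, 8}
toy_L = lambda n: 1              # placeholder for the real provability machine
L_prime = make_L_prime(toy_L, S)
# L_prime(2) == 1 (index in S), while L_prime(3) == 0 (index outside S)
```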

Next, we will present an algorithm which uses a strategy similar to Solomonoff induction to satisfy the weak Benford test.

Charlie, note that I am talking about the first digit of A(n), not the last digit. The parity of A(n) does not matter much. The reason powers of 10 are special is that the A(n) is a power of 10, and must start with a 1.

I personally believe the assumption I am making about S, but if you do not, you can replace it with some other sequence that you believe is pseudorandom.

This afterthought confused me. I spent fifteen minutes trying to figure out why you claim that Ackermann numbers are all powers of 10 and start with 1. I guess you wanted to write something like: »The reason […] is that if n is a power of 10, A(n) must be a power of 10, and start with a 1« Right?

Oh, whoops! Managed to confuse myself. I’m not totally sure about Benford’s law, but am happy to assume for the sake of argument now that I’m not actually deluded.

Double Edit: Hm, the converse of the equidistribution theorem doesn’t hold even close to as strictly as I thought it did. Nevermind me.


~~I dunno that I buy that the Benford test is a desideratum at all levels of computational resources (if n is even, A(n) is even), but~~ I’m still pretty hyped that you’re posting about logical uncertainty. Actually, now I’m curious about the modulo arithmetic properties of Ackermann’s function.