Here’s a question inspired by thinking about Turing RL, and trying to understand what kind of “beliefs about computations” we should expect the agent to acquire.
Does mathematics have finite information content?
First, let’s focus on computable mathematics. At first glance, the answer seems obviously “no”: because of the halting problem, there’s no algorithm (i.e. a Turing machine that always terminates) which can predict the result of every computation. Therefore, you can keep learning new facts about results of computations forever. BUT, maybe most of those new facts are essentially random noise, rather than “meaningful” information?
Is there a principled difference between “noise” and “meaningful content”? It is not obvious, but the answer is “yes”: in algorithmic statistics there is the notion of “sophistication”, which measures how much “non-random” information is contained in some data. In our setting, the question can be operationalized as follows: is it possible to have an algorithm A plus an infinite sequence of bits R, s.t. R is random in some formal sense (e.g. Martin-Löf) and A can decide the output of any finite computation if it is also given access to R?
The answer to the question above is “yes”! Indeed, Chaitin’s constant is Martin-Löf random. Given access to Chaitin’s constant, it is possible to construct a halting oracle, therefore A can decide whether the computation halts, and if it does, run it (and if it doesn’t, output N/A or whatever).
[EDIT: Actually, this is not quite right. The way you use Chaitin’s constant to emulate a halting oracle produces something that’s only guaranteed to halt if you give it the correct Chaitin’s constant.]
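To make the construction concrete, here is a toy sketch (not from the original post) of the dovetailing idea: I use a hypothetical finite “machine” whose halting measure Ω is exactly computable, so an algorithm that knows Ω can decide halting by running all programs in parallel until the observed halting mass reaches Ω. Consistent with the caveat in the edit above, the procedure is only guaranteed to terminate when given the correct Ω.

```python
# Toy demonstration: deciding halting given Chaitin's constant (Omega).
# The "machine" below is hypothetical and finite, so Omega is exactly
# computable; for a real universal machine it is only semi-computable.
from fractions import Fraction

# Programs are bitstrings of length 3 (so the halting measure is a toy,
# not a true prefix-free Omega).
PROGRAMS = [format(i, "03b") for i in range(8)]

def steps_to_halt(p):
    """Steps after which p halts, or None if it runs forever.
    (Toy rule: p halts iff it has an even number of 1s.)"""
    if p.count("1") % 2 == 0:
        return int(p, 2) + 1
    return None

# Omega = sum of 2^-len(p) over halting programs p.
OMEGA = sum((Fraction(1, 2 ** len(p)) for p in PROGRAMS
             if steps_to_halt(p) is not None), Fraction(0))

def halts(query, omega):
    """Decide whether `query` halts by dovetailing: run every program for
    t steps, t = 1, 2, ..., until the measure of programs seen to halt
    reaches omega. Terminates only if omega is the true halting measure."""
    halted = set()
    mass = Fraction(0)
    t = 0
    while mass < omega:
        t += 1
        for p in PROGRAMS:
            s = steps_to_halt(p)  # stands in for "run p for t steps"
            if p not in halted and s is not None and s <= t:
                halted.add(p)
                mass += Fraction(1, 2 ** len(p))
    # Every program that ever halts has halted by step t.
    return query in halted

print(halts("000", OMEGA))  # even parity, halts -> True
print(halts("001", OMEGA))  # odd parity, runs forever -> False
```

Note how the decision procedure never inspects the query program’s code directly; all the information it needs beyond running programs is packed into the (random-looking) bits of Ω, which is the point of the construction.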
But this is a boring solution. In practice we are interested in efficient methods of answering mathematical questions, and in beliefs acquired by resource-bounded agents. Hence, the question becomes: given a resource bound B (e.g. a bound on space or time complexity), is it possible to have A and R similar to the above, s.t. A respects the bound B and R is pseudorandom in some formal sense w.r.t. the bound B?
[EDIT: I guess that the thing analogous to the unbounded setting would be: A only has to respect B when given the correct R. But the real conclusion is probably that we should look for something else instead, e.g. some kind of infradistribution.]
This is a fun question, because any answer would be fascinating in its own way: either computable mathematics has finite content in some strong formal sense (!) or mathematics is infinitely sophisticated in some formal sense (!)
We can also go in the other direction along the “hierarchy of feasibility”, although I’m not sure how useful that is. Instead of computable mathematics, let’s consider determining the truth (not provability, but actual truth) of sentences in e.g. Peano Arithmetic. Do A and R as above still exist? This would require e.g. a Martin-Löf random sequence which allows making any finite number of Turing jumps.
Wikipedia claims that every sequence is Turing reducible to a random one (the Kučera–Gács theorem), giving a positive answer to the non-resource-bounded version of any question of this form. There might be a resource-bounded version of this result as well, but I’m not sure.