Resource-Limited Reflective Oracles

Reflective oracles accurately answer questions about what arbitrary halting probabilistic oracle machines output. It is possible to make a variant of a reflective oracle that accurately answers questions about what sufficiently short-running Turing machines with access to the same oracle output.


These oracles are explicitly computable, fairly powerful in a computational-complexity sense, and can probably be used to make a reflective version of AIXItl (though that construction will not happen in this particular post).

The theorems in this post are not exhaustive at all, just the stuff I was able to figure out in ~2 hours, so there is almost certainly low-hanging fruit in figuring out how these interact with the standard arsenal of theorems in computational complexity theory.

#Motivation

Let’s say we want to make an oracle that is reflective over all TMs that run in polynomial time. How do we do that?

The obvious approach is to let a query consist of a tuple of a Turing machine $M$, a number $n$, a bitstring $x$, and a probability $p$. Build the tree of all possible execution paths of the Turing machine that are poly($n$) steps long (for some polynomial), and compute the probability of every possible output (including a probability for timing out). The oracle must then return 0 if the true probability of outputting $x$ is less than $p$, 1 if the true probability of outputting $x$ is greater than $p$, and randomize otherwise.
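To pin down these intended semantics, here is a minimal sketch in Python of the answer rule just described, ignoring efficiency entirely. The machine interface (a function taking a tuple of coin flips and a step limit, and returning its output or `None` on timeout), the name `oracle_answer_naive`, and the particular polynomial are illustrative assumptions, not part of the construction.

```python
import itertools
import random

def oracle_answer_naive(machine, n, x, p, poly=lambda k: k ** 2):
    """Sketch of the 'obvious approach': enumerate every coin-flip sequence the
    machine could use within a poly(n) step budget, compute the probability
    that it outputs x, and answer the query (machine, n, x, p) accordingly.
    `machine(flips, step_limit)` is an assumed interface returning the
    machine's output, or None if it times out."""
    budget = poly(n)
    prob_x = 0.0
    # Each length-`budget` coin-flip string occurs with probability 2^-budget.
    for flips in itertools.product((0, 1), repeat=budget):
        if machine(flips, budget) == x:
            prob_x += 2.0 ** (-budget)
    if prob_x < p:
        return 0
    if prob_x > p:
        return 1
    return random.randint(0, 1)  # randomize when the probability is exactly p
```

This is exponential-time bookkeeping, of course; the point is only to make explicit what the oracle is supposed to be accurate about.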

This obvious approach doesn’t work, because it’s possible to find out whether any halting Turing machine, no matter how long-running, returns $x$, by recursive oracle calls. Let $M$ be some inputless Turing machine that runs for a very long number of turns, like a busy beaver machine. Let $M'$ be the Turing machine that takes a number $n$ as input, simulates $M$ for $n$ turns, and if $M$ halts within $n$ turns, outputs what $M$ would. If $M$ doesn’t halt within $n$ turns, this algorithm queries the oracle about whether $M'(2^n)$ outputs $x$, and if so, outputs $x$.
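A sketch of this cheating machine, with the oracle and the step-bounded simulator treated as assumed black-box interfaces (the names `simulate` and `oracle`, and the choice of query probability $1/2$, are illustrative):

```python
def M_prime(n, M, x, oracle, simulate):
    """One link of the recursive cheat against the naive oracle. `simulate(M, steps)`
    runs the inputless machine M for the given number of steps and returns its
    output, or None if it hasn't halted yet; `oracle` answers queries of the
    form (machine, number, output, probability). Both are assumed interfaces."""
    result = simulate(M, n)
    if result is not None:
        return result  # M halted within n steps; report its actual output.
    # Otherwise, ask the oracle about M'(2^n), which runs under a poly(2^n)
    # budget -- exponentially more time than this call had.
    answer = oracle(M_prime, 2 ** n, x, 0.5)
    if answer == 1:
        return x
    return None  # what it outputs otherwise doesn't matter for the argument
```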

It is possible to emulate $n$ steps of a Turing machine in $O(n \log n)$ time, it is possible to write down $2^n$ in $O(n)$ time, and it is possible to write down $M'$ in constant time, so $M'$ runs in poly($n$) time. Therefore, the oracle should be accurate about what this Turing machine outputs for any given $n$. However, if you look at what this Turing machine is doing, it is giving itself an exponentially longer time bound on each oracle call, so it will eventually be able to fully simulate whatever Turing machine it wants.

The obvious patch to this is ensuring that, given some monotonically increasing resource-bound function $f$ for the oracle (linear, polynomial, exponential), the bound does not grow when calling the oracle again: a machine running under the bound $f(n)$ may only ask about machines that also run under the bound $f(n)$. You can’t have a chain where the oracles keep recursively calling themselves and giving themselves more computational resources, so all chains of recursive oracle calls have to stay bounded above by $f(n)$.

So, the complexity class $\mathrm{Refl}(f(n))$ is the set of all problems solvable in $O(f(n))$ time, with access to an $f(n)$-reflective oracle.

An $f(n)$-reflective oracle is an oracle that accurately answers any query posed about a probabilistic Turing machine that halts with probability 1 in $C \cdot f(n)$ time or less (where $C$ is some constant), and has access to an $f(n)$-reflective oracle itself.

Pretty much, once $n$ is put into the starting algorithm, every algorithm it calls the oracle on must have runtime and oracle query length bounded by $f(n)$, and so must all of the algorithms that one calls, and so on. This prevents an unboundedly long chain of different oracle calls from simulating arbitrary Turing machines: eventually you either get a finite set of Turing machines all calling each other, in which case an equilibrium exists (there are only finitely many oracle queries of length at most $f(n)$), or the condition of bounded runtime and oracle query length is violated, and the oracle can return anything at that point.
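Putting the definition together with the patch, the condition on an $f(n)$-reflective oracle $O$ can be written out roughly as follows (this is just a restatement in symbols of the two paragraphs above, not an additional requirement):

$$\Pr\!\left[M^{O} \text{ outputs } x\right] > p \;\Longrightarrow\; O(M,x,p)=1, \qquad \Pr\!\left[M^{O} \text{ outputs } x\right] < p \;\Longrightarrow\; O(M,x,p)=0,$$

for every probabilistic oracle machine $M$ that halts with probability 1 within $C \cdot f(n)$ steps and whose oracle queries (and, recursively, the queries made by the machines those queries are about) all have length at most $f(n)$ and respect the same runtime bound. On queries that violate these conditions, the oracle may return anything.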

#Immediate Theorems

Theorem 1: There is some constant $c$ such that, for any function $f$, $\mathrm{SPACE}(f(n)) \subseteq \mathrm{Refl}(c \cdot f(n))$.

Proof Sketch: We build a sequence of oracle machines, where each one simulates a finite number of steps, and then passes the intermediate state to the next oracle machine via the oracle query. The overall result is something like how, if an algorithm can be carried out on a sheet of paper, it can be split up among a long chain of people, where each person does a bit of work on the paper and passes it to the next person.

Proof: Let $M_t$ be the Turing machine $M$, advanced by $t$ time steps. Consider the Turing machine that emulates $M_t$ for one step, and then queries the oracle about $M_{t+1}$. Because the problem is in $\mathrm{SPACE}(f(n))$, all the intermediate states that are recursively passed along have length bounded above by $f(n)$, and adding the code to simulate it ahead one step is a constant increase in length. Furthermore, emulating a single step of a Turing machine can be done in constant time.

Therefore, since it takes constant time to emulate the Turing machine one step forward, and $O(f(n))$ time to write down the query to the oracle, none of the oracle queries or Turing machines run over the $c \cdot f(n)$ time bound, and once an answer is found, it gets passed back along the chain, by the condition that the oracle must be accurate about any algorithm whose runtime stays under the time bound.
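A sketch of one link of this chain, again with the configuration-manipulation helpers and the oracle treated as assumed interfaces rather than real APIs:

```python
def chain_link(config, x, oracle, step, halted, output_of):
    """One link in the Theorem 1 chain. `config` is an intermediate configuration
    of the space-f(n) machine (length at most f(n)); `step` advances it by one
    step, `halted` tests whether it is a halting configuration, and `output_of`
    reads off the output. All of these, and `oracle`, are assumed interfaces."""
    if halted(config):
        return 1 if output_of(config) == x else 0
    next_config = step(config)  # a constant-time, local update
    # The query just passes the new configuration along; its length is
    # |config| + O(1), so it stays within the c*f(n) bound.
    return oracle(chain_link, next_config, x, 0.5)
```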

Theorem 2: There is some constant $c$ such that, for any function $f$ that is computable in $O(f(n))$ time, $\mathrm{NTIME}(f(n)) \subseteq \mathrm{Refl}(c \cdot f(n))$.

Proof: Any nondeterministic Turing machine $N$ that decides a language in $\mathrm{NTIME}(f(n))$ has a corresponding randomized Turing machine $M$ that generates a random string of length $f(n)$, and uses that to decide which path to go down. Now consider the Turing machine that makes (and returns the answer to) the single oracle query asking whether $M$ accepts with probability greater than $2^{-f(n)-1}$. Pretty much, this asks whether there is any random string that leads to $M$ accepting, and it perfectly correlates with whether $N$ accepts or rejects. Writing down $M$ takes constant time, writing down the probability $2^{-f(n)-1}$ takes $O(f(n))$ time, and computing $f(n)$ in the first place takes $O(f(n))$ time by assumption.
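Spelling out why the single query perfectly correlates with acceptance, with the threshold probability $2^{-f(n)-1}$ used above (any threshold strictly between $0$ and $2^{-f(n)}$ works): if $N$ has at least one accepting path, then

$$\Pr[M \text{ accepts}] \;\ge\; 2^{-f(n)} \;>\; 2^{-f(n)-1},$$

so the oracle is forced to answer 1; if $N$ has no accepting path, then $\Pr[M \text{ accepts}] = 0 < 2^{-f(n)-1}$, so the oracle is forced to answer 0.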

#Takeaways

This is super-preliminary; there are certainly more computational complexity results to be found by pushing in this direction. The ability to recursively call algorithms with access to the same oracle is powerful, encompassing both space and nondeterminism, but it is still a step down from true omniscience. It’s pretty likely that a variant of AIXItl can be constructed with this tool.