You pick a Turing machine that you think halts eventually but takes a really long time to do so, and then expend the resource when that TM halts.
After all, a computer with an N-bit program can’t delay longer than BB(N). In practice, if most of the bits aren’t logically correlated with long-running Turing machines, then it can’t run nearly that long. This inevitably becomes a game of logical uncertainty about which Turing machines halt.
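As a toy illustration of the idea (a sketch, not anything from the original comment): use a computation that is provably total but explosively slow, such as Ackermann’s function, as the "timer", and expend the resource only when it halts.

```python
# Hypothetical sketch: delay an action by running a slow but
# provably-halting computation, rather than a clock.

def ackermann(m, n):
    # Ackermann's function is total (always halts) but grows so fast
    # that small inputs already encode astronomically long runtimes.
    while m > 0:
        if n == 0:
            m, n = m - 1, 1
        else:
            n = ackermann(m, n - 1)
            m -= 1
    return n + 1

def delay_then_act(action):
    # We know this halts, but for larger inputs the runtime dwarfs
    # any fixed clock-based delay a program of the same size could name.
    ackermann(2, 3)  # tiny inputs keep this demo fast; A(2, 3) = 9
    return action()

result = delay_then_act(lambda: "resource expended")
```

The logical-uncertainty point shows up here too: swap in a computation that is merely conjectured to halt and the delay scheme inherits your uncertainty about that conjecture.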
I don’t understand the significance of using a TM—is this any different from just applying some probability distribution over the set of actions?
Any number that the AI puts out must be computable, and I was reminded of an entry in a largest-computable-number contest: “the runtime of the longest-running Turing machine that ZFC + a large cardinal axiom can prove halts (with the proof being at most 3^^^3 symbols long)”. This is an almost optimal answer, in that it is well defined if Con(ZFC + large cardinal axiom) holds, and it beats any answer you can give that relies only on ZFC + the large cardinal axiom.
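Spelled out (my paraphrase of the contest entry, with “LC” standing for the large cardinal axiom), the number being named is:

```latex
N = \max\bigl\{\, \mathrm{steps}(M) \;:\; \mathrm{ZFC} + \mathrm{LC} \text{ proves ``$M$ halts'' with a proof of at most } 3\uparrow\uparrow\uparrow 3 \text{ symbols} \,\bigr\}
```

where $\mathrm{steps}(M)$ is the number of steps the Turing machine $M$ takes before halting, and $3\uparrow\uparrow\uparrow 3$ is Knuth arrow notation for 3^^^3. The proof-length cap keeps the set finite, so the maximum exists whenever the theory is consistent.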
An AI asked to output the largest number it can is playing a game of “name the largest computable number”.
Wait, but can’t the AI also choose to adopt the strategy “build another computer with a larger largest computable number”?
If the computer has a finite amount of memory, say n bits, and can’t build more, this puts a 2^n bound on how long it can wait. If it can build more, it will.
The point is that it needs to pick some long-running computation that it can be fairly sure halts eventually.
This gets into details about exactly how the AI is handling logical uncertainty.