An idealized Oracle is equivalent to a universal Turing machine (UTM).
A self-improving Oracle approaches UTM-like behavior in the limit.
What about a (self-improving) token predictor under iteration? It appears Oracle-like, but does it tend toward UTM behavior in the limit, or is it something distinct?
Maybe, just maybe, the model does something that keeps it from being UTM-like in the limit, and maybe (very much maybe) that would let us imbue it with some desirable properties.
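To make "token predictor under iteration" concrete, here's a toy sketch (my own illustration, not a claim about any real model): a "predictor" whose only skill is emitting the next cell of Rule 110 given the token history so far. Since Rule 110 is known to be Turing-complete, autoregressively iterating even this trivial predictor unfolds arbitrarily powerful computation — which is the sense in which iteration could push a predictor toward UTM-like behavior. The names `predict_next`, `iterate`, and the fixed `WIDTH` are all artifacts of the toy.

```python
# Toy sketch: an autoregressive "token predictor" whose next token is
# the next Rule 110 cell. Iterating it row by row unfolds a computation
# from a Turing-complete rule. Purely illustrative.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

WIDTH = 16  # fixed tape width for the toy (wraps at the edges)

def predict_next(context):
    """Predict the next 0/1 token from the flat token history.

    Every WIDTH tokens form one row of the automaton; the next token is
    determined by the three cells above it in the previous row.
    """
    i = len(context) % WIDTH           # column of the cell being predicted
    row = len(context) // WIDTH        # row currently being generated
    prev = context[(row - 1) * WIDTH : row * WIDTH]
    left, mid, right = prev[(i - 1) % WIDTH], prev[i], prev[(i + 1) % WIDTH]
    return RULE_110[(left, mid, right)]

def iterate(seed_row, n_rows):
    """Autoregressively roll the predictor forward for n_rows rows."""
    context = list(seed_row)
    while len(context) < (n_rows + 1) * WIDTH:
        context.append(predict_next(context))
    return [context[r * WIDTH:(r + 1) * WIDTH] for r in range(n_rows + 1)]
```

The open question in the post is whether a *learned* predictor, iterated this way, converges on something with this kind of universality or on something weaker (and perhaps more controllable).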
/end shower thought