Here’s your citation, @Steven Byrnes, for the claim that a Turing-computable learner can only predict or act on sequences of a given complexity by being at least that complex itself.
“Is there an Elegant Universal Theory of Prediction?”, Shane Legg (2006):
https://arxiv.org/abs/cs/0606070

The immediate corollary: as AI gets better, it will by default become more and more complex, so interpreting AIs will not get easier; interpreting the learned parts of an AI will only get harder.