See example 2 in the post. I think you can use Rice’s theorem to easily construct hypotheses with hard-to-locate predictors, but I’m not sure about the K-complexity of the resulting predictors.
The K-complexity of the program defined by that criterion is about as low as that of the criterion itself, I’m afraid, so example 2 is invalid (“complexity” that is not K-complexity shouldn’t be relevant). The universal prior for that theory is not astronomically low.
Edit: This is wrong, in particular because the criterion doesn’t present an algorithm for finding the program, and because the program must by definition have high K-complexity.
Um, what? Can you exhibit a low-complexity algorithm that predicts sensory inputs in accordance with the theory from example 2? That’s what it would mean for the universal prior to not be low. Or am I missing something?
You are right, see updated comment.
Yes, I forgot about that. So just crossing the meta-levels once is enough to create a gap in complexity, even if the event only has one element.
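A rough way to see the distinction being settled here: when a short criterion is *decidable*, a brute-force search program pins down the object it names, so the object’s K-complexity stays low no matter how random it looks. The gap in the thread opens precisely when the criterion is undecidable (Rice-style), so no such search program exists. The sketch below illustrates the decidable half of that, using zlib-compressed size as a crude and admittedly imperfect stand-in for K-complexity; the `find_incompressible` search and its threshold are illustrative choices, not anything from the original discussion.

```python
import random
import zlib

def compressed_size(s: bytes) -> int:
    """Crude K-complexity proxy: length of the zlib-compressed form."""
    return len(zlib.compress(s, 9))

# A short, *decidable* criterion: "the first length-64 string in a fixed
# enumeration that zlib cannot compress below 60 bytes". We simulate the
# enumeration with a seeded RNG so the search is deterministic.
rng = random.Random(0)

def find_incompressible(n: int = 64, threshold: int = 60) -> bytes:
    while True:
        candidate = bytes(rng.randrange(256) for _ in range(n))
        if compressed_size(candidate) >= threshold:
            return candidate

s = find_incompressible()

# The string looks incompressible to zlib, yet its true K-complexity is
# low: this whole search program is a short description of it. That is
# why a decidable criterion cannot create a complexity gap. For a
# semantic (Rice-theorem) criterion there is no such search loop, and
# the object's K-complexity is no longer bounded by the criterion's.
print(len(s), compressed_size(s))
```

The point of the sketch is the bound, not the compression numbers: K(object) ≤ K(criterion) + O(log index) holds exactly when the criterion can be checked by a program, which is what fails once the meta-level is crossed.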