Because what matters is not whether the universe is computable, but whether our methods of reasoning are computable; in other words, whether the map is computable. Solomonoff induction is at least as "good" as any computable inference method (up to a constant), regardless of the complexity of the universe. So if you, as a human, are trying to come up with a systematic way to predict things (even uncomputable things), Solomonoff induction does at least as well. Here is the precise statement:
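One standard way to make this precise is the dominance theorem for Solomonoff's universal prior (as presented, e.g., in Li and Vitányi's textbook); here $M$ is the universal semimeasure, $\mu$ is any lower-semicomputable semimeasure (any computable prediction method), and $K(\mu)$ is the length of the shortest program computing $\mu$:

$$
M(x) \;\geq\; 2^{-K(\mu)}\,\mu(x) \qquad \text{for every finite string } x.
$$

That is, $M$ assigns to every observation sequence at least a constant fraction of the probability that any computable method assigns, where the constant $2^{-K(\mu)}$ depends only on the method, not on the data. This is the sense in which SI is "as good as any computable inference method, up to a constant."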
The claim that SI is at least as good as human reasoning is a relative claim. If human reasoning is limited or useless in absolute terms, so is SI.
Also, there is no proof that human reasoning would remain computable in a hypercomputational universe. We have no evidence about the world that is independent of the mind. That's the computational circle.