So is there then a pragmatic assumption that can be made? Maybe we assume that if I pick a Turing machine blindly, without specifically designing it for a particular output string, it’s unlikely to be strongly biased towards that string.
What probability distribution over Turing machines do you blindly pick it from? That’s another instance of the same problem.
Pragmatically, if I non-blindly pick some representation of Turing machines that looks simple to me (e.g. the one Turing used), I don’t really doubt that it’s within a few thousand bits of the “right” version of Solomonoff, whatever that means.
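(The “within a few thousand bits” intuition comes from the invariance theorem: any two universal machines assign priors that differ by at most a multiplicative constant, namely the length of an interpreter for one machine written on the other. A minimal toy sketch, using hypothetical finite program tables rather than real universal machines:)

```python
# Toy illustration of the invariance-theorem intuition (hypothetical
# machines, not real UTMs): a "machine" is a finite map from binary
# program strings to output strings, and the prior weight of an output
# x under machine M is the sum of 2^-len(p) over programs p with M(p) = x.

def prior(machine, x):
    return sum(2.0 ** -len(p) for p, out in machine.items() if out == x)

# Machine V outputs each string directly from a 2-bit program.
V = {"01": "ab", "10": "ba", "11": "aa"}

# Machine U runs V-programs via a fixed interpreter prefix: every
# V-program p becomes the U-program INTERPRETER + p. The prefix length
# c plays the role of the interpreter overhead in the invariance theorem.
INTERPRETER = "110"  # c = 3 bits of simulation overhead (made-up value)
U = {INTERPRETER + p: out for p, out in V.items()}

c = len(INTERPRETER)
for x in ["ab", "ba", "aa"]:
    # U's prior on every output is exactly 2^-c times V's, so switching
    # machines shifts code lengths by at most c bits.
    assert prior(U, x) == 2.0 ** -c * prior(V, x)
```

In this toy model the gap is exactly the interpreter length; the claim above is that, for reasonable-looking encodings, that constant is on the order of thousands of bits rather than something astronomically large.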
Why not?