If F, m and a are true real numbers, it's uncomputable (you can't encode it on a Turing machine) and not even considered by Solomonoff induction, so there.
That’s a cop-out: just discretize the relevant variables so you’re dealing with integers (use Planck units, if you feel like it). The complexity should not depend on the step size.
Well, it does depend on the step size. You end up with a higher probability for a larger discretization constant (and zero probability with no discretization at all) if you use the Turing machine model of computation (a model that handles reals directly would not face this issue). I’m trying to explain that, in the ideal limit, it has some serious shortcomings. The optimality proofs do not imply that it is good; they only imply that nothing else is ‘everywhere as good and somewhere better’.
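To make the step-size dependence concrete, here is a minimal sketch under some assumptions of my own (a fixed measurement range, a fixed number of data points, and a literal "bits to write the values" cost as a toy stand-in for program length): coarser grids cost fewer bits, so a 2^(-length) prior assigns them more weight, and the weight heads to zero as the step does.

```python
# Toy illustration, not Solomonoff induction itself: count the bits needed
# to write down discretized measurements at different step sizes, and note
# how a 2^(-description length) prior then favors coarser grids.
# RANGE, N_MEASUREMENTS, and the per-value bit cost are assumptions made
# purely for illustration.
import math

RANGE = 1.0            # assumed span of the measured quantity
N_MEASUREMENTS = 100   # assumed number of data points to encode

def bits_per_value(step: float) -> float:
    """Bits needed to name one value on a grid with the given step size."""
    return math.log2(RANGE / step)

for step in (1e-1, 1e-3, 1e-6):
    total = N_MEASUREMENTS * bits_per_value(step)
    # Finer steps -> more bits -> smaller 2^(-total) prior weight;
    # as step -> 0 the weight goes to zero, matching the point above.
    print(f"step={step:g}: about {total:.0f} bits, prior weight ~ 2^(-{total:.0f})")
```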
The primary use of this sort of thing—highly idealized induction that is uncomputable—is not to do induction but to find limits to induction.