A potential problem with using Solomonoff induction as a prior

There’s a problem that has occurred to me that I haven’t seen discussed anywhere: I don’t think people actually want to assign zero probability to all hypotheses which are not Turing computable. Consider the following hypothetical: we come up with a theory of everything that seems to explain all the laws of physics, but it has a single open parameter (say the fine structure constant). We compute a large number of digits of this constant, and someone notices that when it is expressed in base 2, the nth digit seems to be 1 iff the nth Turing machine halts on the blank tape, for some fairly natural ordering of all Turing machines. If we confirm this for a large number of digits (not necessarily consecutive digits; obviously some of the 0s won’t be confirmable, since we can only ever observe that a machine has halted, never that it runs forever), shouldn’t we consider the hypothesis that the digits really are given by this simple but non-computable function? But if our prior assigns zero probability to all non-computable hypotheses, then this hypothesis must always be stuck with zero probability, no matter how many digits it correctly predicts.
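To spell the hypothesis out in symbols (the notation α, h_n, and M_n is mine, just for concreteness): if M_1, M_2, … is the chosen enumeration of Turing machines, the claim is that the fractional part of the constant is

$$\alpha \;=\; \sum_{n \ge 1} h_n\, 2^{-n}, \qquad h_n = \begin{cases} 1 & \text{if } M_n \text{ halts on the blank tape},\\ 0 & \text{otherwise}. \end{cases}$$

This is a perfectly well-defined real number with a short description, but no algorithm outputs its digits, which is exactly why a prior restricted to computable hypotheses can never give it positive weight.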

If the universe is finite we could approximate this function with a computable one that instead asks whether the nth machine halts within K steps, where K is some large number, but intuitively this seems like a more complicated hypothesis than the original one.
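Here is a minimal sketch of what the “halts within K steps” predicate means, assuming a toy encoding of a Turing machine as a transition table; the function name, the encoding, and the example machine are all mine, not anything standard:

```python
# A Turing machine is encoded (for this sketch only) as a dict mapping
# (state, symbol) -> (new_state, symbol_to_write, move), run on a blank tape.

def halts_within(delta, K, start_state="q0", halt_state="HALT", blank=0):
    """True iff the machine reaches halt_state within K steps on a blank tape."""
    tape = {}                        # sparse tape; unwritten cells hold `blank`
    head, state = 0, start_state
    for _ in range(K):
        if state == halt_state:
            return True
        key = (state, tape.get(head, blank))
        if key not in delta:         # no applicable rule: treat as halting
            return True
        state, write, move = delta[key]
        tape[head] = write
        head += 1 if move == "R" else -1
    return state == halt_state       # still running after K steps counts as "no"

# Example: a one-rule machine that writes a 1 and halts immediately.
tiny_machine = {("q0", 0): ("HALT", 1, "R")}
print(halts_within(tiny_machine, K=10**6))   # True
```

Given some enumeration of machines, say a hypothetical nth_machine(n), the approximating hypothesis would predict that the nth digit is 1 iff halts_within(nth_machine(n), K). The particular astronomically large K has to be part of the description, which is presumably the sense in which this feels more complicated than the original non-computable hypothesis.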

I’m not sure what a reasonable prior that handles this sort of thing would look like. We don’t want an uncountable set of hypotheses, since at most countably many of them can get positive probability. It might make sense to use something like the set of hypotheses which are definable in Peano arithmetic.
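One way to make that suggestion concrete (just a sketch; the notation is mine): enumerate the formulas of Peano arithmetic with one free variable as φ_1, φ_2, …, read each φ_i as defining the binary sequence whose nth bit is 1 iff ℕ ⊨ φ_i(n), and weight hypotheses by description length, something like

$$P(\varphi_i) \;\propto\; 2^{-\ell(\varphi_i)},$$

where ℓ(φ_i) is the length of a prefix-free binary encoding of the formula, so the weights sum to at most 1. The hypothesis class stays countable, and it does contain the halting-digits hypothesis above, since “the nth machine halts” is expressible by a Σ_1 formula. Of course, actually updating on such a prior would require deciding arithmetical truth, so it trades the original problem for a much worse tractability problem.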