[Question] Questions about Solomonoff induction

I am confused about Solomonoff induction and some related concepts. I would appreciate help from anyone who understands this.

What I understand so far is:

  1. I understand Bayesian statistics and the problem of choosing the priors

  2. I understand that Kolmogorov complexity can be used to measure the complexity of a hypothesis

  3. I think I understand that Solomonoff induction consists of evaluating all possible hypotheses that could generate some observed data, and weighting them according to their complexity
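In symbols, my current picture of the weighting is that the posterior over hypotheses combines a complexity-based prior with the likelihood (please correct me if this is already wrong):

```latex
% My (possibly wrong) picture: prior weight 2^{-K(h)} times likelihood
P(h \mid D) \;\propto\; 2^{-K(h)} \, P(D \mid h)
```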

For example, the sequence HTHTT could be generated by different programs:

  1. A program that flips a fair coin

  2. A program that flips a coin with p=0.4

  3. A program that generates “exactly” the sequence HTHTT

  4. … etc. Infinitely many programs could be written here

I do understand that a program that flips a fair coin is simpler than a program that generates that exact output.
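To make my example concrete, here is how I am computing the likelihoods of HTHTT under the first three programs (a quick Python sketch; I am assuming p = 0.4 means the probability of heads):

```python
# Likelihood of the observed sequence HTHTT under each of the three
# example hypotheses above (assuming p refers to the heads probability).
seq = "HTHTT"
n_heads = seq.count("H")  # 2
n_tails = seq.count("T")  # 3

def bernoulli_likelihood(p_heads):
    """P(seq | i.i.d. coin with the given heads probability)."""
    return p_heads ** n_heads * (1 - p_heads) ** n_tails

lik_fair = bernoulli_likelihood(0.5)    # 1/2**5 = 0.03125
lik_biased = bernoulli_likelihood(0.4)  # 0.4**2 * 0.6**3 = 0.03456
lik_exact = 1.0                         # the program prints HTHTT with certainty
```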

What I don’t understand is:

How exactly do you weigh complexity against likelihood?

In the previous example, we could say that the first program has complexity X bits while the third program has complexity Y bits, with Y > X (but only by a small difference, because we don’t have that many observations). Here is where things get a bit murky for me. The “predictive” power of the first program for that specific output, being probabilistic, is 1/​2^5, while for the third program the likelihood is 1. To prefer the first hypothesis we would have to assign a prior probability at least 2^5 times higher to the first program than to the third one. Where does this 2^5 factor come from?
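Here is the arithmetic I am doing, as a sketch (the complexity values in bits are made up for illustration; I don't know the real ones):

```python
# Sketch: combining 2**-K priors with likelihoods for HTHTT.
# The complexities below (in bits) are made-up illustrative values,
# not real Kolmogorov complexities.
lik_fair = 1 / 2**5   # fair-coin program: each 5-flip sequence has prob 1/32
lik_exact = 1.0       # "print HTHTT" program: probability 1

def posterior_weight(complexity_bits, likelihood):
    # Unnormalized posterior weight: 2**-K(h) * P(data | h)
    return 2 ** -complexity_bits * likelihood

K_fair = 10  # hypothetical complexity of the fair-coin program

# The fair-coin program wins exactly when the exact program costs more
# than 5 extra bits: the 2**5 likelihood advantage is what it must overcome.
w_fair = posterior_weight(K_fair, lik_fair)
w_exact_short = posterior_weight(K_fair + 4, lik_exact)  # only 4 extra bits
w_exact_long = posterior_weight(K_fair + 6, lik_exact)   # 6 extra bits
# w_exact_short > w_fair: exact program still preferred
# w_exact_long < w_fair: fair coin now preferred
```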

I know this is a theoretical question, but if you can use coin flips in the explanation, that would help a lot. If you spot other places where I am confused but mistakenly thought I wasn’t, please point that out.