“Okay, so you’re saying the actual hypotheses that predict my observations, which I should assign probability to according to their complexity, are things like ‘T1 and I’m person #1’ or ‘T2 and I’m person #10^10’?” says the Solomonoff inductor.
“Exactly.”
“But I’m still confused. Because it still requires information to say that I’m person #1 or person #10^10. Even if we assume that it’s equally easy to specify where a person is in both theories, it just plain old takes more bits to say 10^10 than it does to say 1.”
I think this section is confused about how the question “T1 or T2” gets encoded for a Solomonoff inductor.
Given the first chunk in the quote above, we don’t have two world models; we have one world model for each person in T1, plus one world model for each person in T2. Our models are (T1 & person 1), (T1 & person 2), …, (T2 & person 1), …. To decide whether we’re in T1 or T2, our Solomonoff inductor will compare the total probability of all the T1 hypotheses to the total probability of all the T2 hypotheses.
Assuming T1 and T2 have exactly the same complexity, (T1 & person N) should presumably have roughly the same complexity as (T2 & person N). That is not necessarily the case; T1/T2 may contain information which makes some numbers cheaper or more expensive to encode. But it does seem like a reasonable approximation for building intuition.
Anyway, point is, “it just plain old takes more bits to say 10^10 than it does to say 1” isn’t relevant here. There’s no particular reason to compare the two hypotheses (T1 & person 1) vs (T2 & person 10^10); that is not the correct formalization of the T1 vs T2 question.
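As a toy numerical sketch of why the extra bits to name person #10^10 wash out (my own illustration, not from the post; the 2·floor(log2 n)+1-bit integer code is a hypothetical stand-in for the true complexity of specifying a person):

```python
import math

def person_code_bits(n):
    # Hypothetical prefix-free code for "I'm person #n":
    # roughly 2*floor(log2(n)) + 1 bits (Elias-gamma-style).
    return 2 * math.floor(math.log2(n)) + 1

def total_prior(theory_bits, num_people):
    # Sum the Solomonoff-style prior 2^-(K(T) + K(n)) over every
    # (T & person n) hypothesis the theory makes available.
    return sum(2.0 ** -(theory_bits + person_code_bits(n))
               for n in range(1, num_people + 1))

# Toy numbers: both theories cost 100 bits to specify, but T2
# posits a vastly larger population than T1.
p_T1 = total_prior(100, 10)
p_T2 = total_prior(100, 10**6)

# Naming person #10^6 takes ~40 bits, yet the per-person sum
# converges (to at most 1 under this code), so the two totals
# stay within a small constant factor of each other.
ratio = p_T2 / p_T1
```

Under this (assumed) coding scheme the ratio comes out close to 1 despite the 10^5-fold population difference, which is the sense in which comparing (T1 & person 1) against (T2 & person 10^10) head-to-head is the wrong formalization.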
I’m not really sure what you’re arguing for. Yes, I’ve elided some details of the derivation of average-case complexity of bridging laws (which has gotten me into a few factors of two worth of trouble, as Donald Hobson points out), but it really does boil down to the sort of calculation I sketch in the paragraphs directly after the part you quote. Rather than just saying “ah, here’s where it goes wrong” by quoting the non-numerical exposition, could you explain what conclusions you’re led to instead?
Oh I see. The quoted section seemed confused enough that I didn’t read the following paragraph closely, but the following paragraph had the basically-correct treatment. My apologies; I should have read more carefully.