Now consider the infinite sum x1 + x2 + x3 + …. Since all of these values are positive (and non-zero, since zero is not a probability), either the sum converges to a positive value or it diverges to positive infinity. In fact, it will converge to a value less than 1: if we had multiplied each term of the series by the number of hypotheses with the corresponding complexity, it would have converged to exactly 1, because probability theory demands that the probabilities of all our mutually exclusive hypotheses sum to exactly 1.
x1, x2, … is not the set of probabilities of all our mutually exclusive hypotheses. Each xi is the average over an infinite number of hypotheses that explain some particular set of data. Each xi would thus be zero, just as the probability that a real-valued random number uniformly drawn from [0, 1] comes up exactly 0.5 is zero.
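To make the continuous-uniform point concrete, here is a minimal sketch (my own illustration, not part of the original argument): for a uniform variable on [0, 1], the probability of landing in an interval equals the interval's length, so the probability of hitting exactly 0.5 is the limit of ever-shorter intervals around it, which is zero.

```python
import random

# For X ~ Uniform(0, 1), P(X in an interval) is the interval's length.
# As the interval around 0.5 shrinks, the probability shrinks with it,
# so the point mass at exactly 0.5 is zero.
def interval_prob(center, eps):
    """P(center - eps < X < center + eps) for X ~ Uniform(0, 1)."""
    lo, hi = max(0.0, center - eps), min(1.0, center + eps)
    return hi - lo

# Probabilities of ever-narrower intervals around 0.5: they tend to 0.
probs = [interval_prob(0.5, 10.0 ** -k) for k in range(1, 7)]

# Empirical sanity check by sampling (seed fixed for reproducibility).
random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n) if abs(random.random() - 0.5) < 0.01)
# hits / n should be close to interval_prob(0.5, 0.01), i.e. about 0.02
```

The same limiting argument is what forces each xi, an average over infinitely many hypotheses, down to zero.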
This makes the proof fail. It’s interesting, though. But I don’t think you can save it. Basically, you have an infinite set of mutually exclusive hypotheses, and you want to label them with an infinite series that sums to 1. But you can’t relate a convergent infinite series to the ordering of your hypotheses unless the complexity n of a hypothesis appears as a term in the formula for its probability; and if it does, your approach requires that this formula decrease with n. So your approach presumes what you’re trying to prove.
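The circularity can be seen in the standard construction. A hedged sketch (my own example, not the original proof): the canonical way to put a probability on hypotheses indexed by complexity n = 1, 2, 3, … is a convergent series such as p_n = 2^−n, and that series sums to 1 precisely because it was chosen to decrease in n.

```python
# Assign probability 2^-n to the hypothesis of complexity n.
# This is the usual geometric labeling, chosen for illustration.
def prior(n):
    """Probability assigned to complexity n under the 2^-n labeling."""
    return 2.0 ** -n

# Partial sums approach 1:  sum_{n >= 1} 2^-n = 1.
partial = sum(prior(n) for n in range(1, 51))

# The point of the objection: the series converges *because* prior(n)
# decreases in n -- the decrease (Occam's razor) is assumed, not derived.
decreasing = all(prior(n) > prior(n + 1) for n in range(1, 50))
```

Any other convergent labeling has the same character: convergence of the series over complexities is bought by making the probability formula shrink with n.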
I’m also bothered by the idea that hypotheses are mutually exclusive. Having an explanation of complexity n does not preclude an explanation of complexity n+1. If your proof applies only to a set of hypotheses pre-selected for being mutually exclusive, that set could have a very different distribution from the set of hypotheses that a would-be user of Occam’s razor faces in the wild.