I agree with @Pattern’s answer, and would add that the two principles can also be compatible if you have a good gears-level model of what counts as an outcome. (I don’t actually know whether typical formulations of the principle of indifference would count such a model, or the mechanism that generates it, as evidence in the relevant sense.)
I’m thinking of statistical mechanics, where “energy units are assigned at random among all available degrees of freedom” is a foundational principle: you apply indifference at the fundamental level, then use it to derive the probabilities of all sorts of not-at-all-equally-likely high-level outcomes. (Basically, this is the same as saying all poker hands are equally likely to be dealt, yet a straight flush is still rarer than a pair.)
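To make the poker version concrete, here’s a minimal sketch using standard combinatorics (nothing here is specific to the answer above): indifference over individual hands, the “microstates,” still produces wildly unequal probabilities for hand types, the “macrostates,” because the macrostates lump together very different numbers of microstates.

```python
from math import comb

# Every 5-card hand (microstate) is equally likely: 1 / C(52, 5).
total = comb(52, 5)                     # 2,598,960 hands

# Straight flushes: 10 possible high cards x 4 suits (royal flush included).
straight_flush = 10 * 4                 # 40 hands

# Exactly one pair: choose the paired rank and its two suits, then three
# distinct other ranks, each in any of the 4 suits.
one_pair = comb(13, 1) * comb(4, 2) * comb(12, 3) * 4**3    # 1,098,240 hands

print(f"P(straight flush) = {straight_flush / total:.2e}")  # ~1.54e-05
print(f"P(one pair)       = {one_pair / total:.2%}")        # ~42.26%
```

Same uniform distribution at the bottom, a factor of ~27,000 between the two outcomes at the top.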
If you don’t have such a model, then I suspect you’re at risk of falling into “it’s 50-50, either I win the lottery or I don’t” errors.
Also: my instinct is that the two principles describe an agent’s states of belief at different times, and combining them amounts to “start with a max-entropy prior, then update on each piece of evidence, along the lines of Laplace’s rule of succession.” Staying indifferent after enough trials for frequentism to even apply seems like a much bigger error than not starting out indifferent.
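A sketch of that combination, just the standard Laplace rule (uniform prior on the success rate, posterior predictive of (s + 1) / (n + 2)), nothing novel:

```python
def laplace_rule(successes: int, trials: int) -> float:
    """P(next trial succeeds) under a uniform (max-entropy) prior
    on the success rate: (successes + 1) / (trials + 2)."""
    return (successes + 1) / (trials + 2)

# Before any evidence, indifference gives 1/2.
print(laplace_rule(0, 0))     # 0.5

# After 100 trials with 70 successes, the estimate has moved close to
# the observed frequency; staying at 1/2 here would be the big error.
print(laplace_rule(70, 100))  # ~0.696
```

So indifference fixes the starting point, and the frequency data dominates once there are enough trials for frequentist talk to make sense at all.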