You don’t need meta-probability to compress priors. For example, a uniform prior on [0,1] talks about an uncountable set of events, but its description is tiny and doesn’t use meta-probabilities.
And the uniform prior is just one special case.
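To make the compression point concrete, here is a minimal sketch (function names are mine, for illustration): the uniform prior on [0,1] is pinned down by a one-line CDF, yet that tiny description assigns a probability to every interval, an uncountable family of events, with no meta-probabilities anywhere.

```python
def uniform_cdf(x: float) -> float:
    """CDF of the uniform prior on [0, 1]: F(x) = x, clamped to [0, 1]."""
    return min(max(x, 0.0), 1.0)

def interval_probability(a: float, b: float) -> float:
    """Probability the uniform prior assigns to the interval [a, b]."""
    return max(uniform_cdf(b) - uniform_cdf(a), 0.0)

# The description above is a couple of lines, but it answers queries
# about any of the uncountably many intervals in [0, 1]:
print(interval_probability(0.25, 0.75))  # 0.5
print(interval_probability(0.0, 0.1))    # 0.1
```

The point is that the size of a prior's description is unrelated to the size of the event space it covers.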