Sure, but you can’t actually hold the full probability vector over all raven states. So you move up a level and summarize that set of probabilities with a smaller (and less precise) one.
All uncertainty is map, not territory. Any time you use probability, you’re acknowledging that you’re a limited calculator that cannot hold the complete state of the universe. If you could, you wouldn’t need probability; you’d actually know the thing.
Meta-models are useful when specific models get cumbersome. Likewise meta-probability.
You don’t need meta-probability to compress priors. For example, a uniform prior on [0,1] talks about an uncountable set of events, but its description is tiny and doesn’t use meta-probabilities.
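To make the compression point concrete, here is a minimal sketch (my own illustration, not from the original thread): the uniform prior on [0,1] assigns probability to uncountably many intervals, yet its entire specification is the one-line rule P([a, b]) = b − a, with no meta-probabilities anywhere.

```python
def uniform_prior_prob(a: float, b: float) -> float:
    """Probability that a Uniform(0, 1) outcome lands in [a, b].

    The whole prior is this one rule: P([a, b]) = b - a,
    clipped to the unit interval. Tiny description, uncountably
    many events covered, no meta-probabilities needed.
    """
    a = max(a, 0.0)
    b = min(b, 1.0)
    return max(b - a, 0.0)

print(uniform_prior_prob(0.25, 0.75))  # 0.5
print(uniform_prior_prob(-2.0, 0.1))   # 0.1 (mass outside [0, 1] is ignored)
```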
And the uniform prior is a special case.