How do we determine our “hyper-hyper-hyper-hyper-hyperpriors”? Before updating our priors however many times, is there any way to calculate the probability of something before we have any data to support any conclusion?
In some applications, you can get a base rate through random sampling and go from there.
Otherwise, you’re stuck making something up. The simplest principle is to assume that if there’s no evidence with which to distinguish possibilities, then one should take a uniform distribution (this has obvious drawbacks if the number of possibilities is infinite). Another approach is to impose some kind of complexity penalty, i.e. to have some way of measuring the complexity of a statement and to prefer statements with less complexity.
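As a rough sketch, both ideas can be put in code. The hypothesis names and the bit counts standing in for "complexity" below are invented for the example:

```python
# Two ways to set a prior before seeing any data.

from fractions import Fraction

hypotheses = ["H1", "H2", "H3", "H4"]

# 1. Uniform prior: with no evidence to distinguish the possibilities,
#    give each the same probability (only works for a finite set).
uniform_prior = {h: Fraction(1, len(hypotheses)) for h in hypotheses}

# 2. Complexity-penalized prior: weight each hypothesis by
#    2**(-complexity), here using a made-up description length in bits
#    as the complexity measure, then normalize so the weights sum to 1.
complexity_bits = {"H1": 1, "H2": 2, "H3": 3, "H4": 3}
weights = {h: Fraction(1, 2 ** complexity_bits[h]) for h in hypotheses}
total = sum(weights.values())
complexity_prior = {h: w / total for h, w in weights.items()}

print(uniform_prior)     # every hypothesis gets 1/4
print(complexity_prior)  # simpler hypotheses get more probability mass
```

Either way, the choice of measure (the set of possibilities, or the complexity function) is still something you bring to the problem rather than derive from data.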
If you have no data, you can’t have a good way to calculate the probability of something. If you defend a method by saying it works in practice, then you’re using data.
No, practical Bayesian probability starts with an attempt to represent your existing beliefs and make them self-consistent. For a brief post on the more abstract problem, see here.
Intuition: trusting our brains to come up with something useful.
Mutter mutter... something to do with parsimony/complexity/Occam's razor?