To continue with the coin example: suppose I need to experimentally investigate the extent to which the coin is fair, calling $p$ the probability of it coming up heads. I then have a continuum of possible hypotheses, $p \in [0, 1]$, all of which I may consider a priori equally likely (a uniform prior). I then perform experiments (tossing the coin).
Bayesian updating tells me that my belief in each hypothesis after $H$ heads and $T$ tails should be
$$P(p \mid H, T) = \frac{p^H (1-p)^T}{B(H, T)}$$
($B(H, T)$ is just the normalization factor, concretely the Beta function $B(H+1, T+1)$; don't mind it).
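As a minimal sketch of this update (assuming Python with scipy; the counts $H = 7$, $T = 3$ are purely illustrative), the posterior under a uniform prior is the Beta$(H+1, T+1)$ distribution, whose density is exactly the formula above:

```python
import numpy as np
from scipy.stats import beta

# Uniform prior on p in [0, 1]; after H heads and T tails the posterior
# density is p^H (1-p)^T / B(H+1, T+1), i.e. a Beta(H+1, T+1) distribution.
H, T = 7, 3  # hypothetical experiment: 7 heads, 3 tails
posterior = beta(H + 1, T + 1)

# Posterior belief (density) at a few candidate values of p:
for p in np.linspace(0.0, 1.0, 6):
    print(f"p = {p:.1f}: density = {posterior.pdf(p):.4f}")
```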
Popperian updating tells me only a piece of this: if I've observed even a single tail, then
$$P(p = 1 \mid T > 0) = 0,$$
and vice versa for observing a single head. This is something I also get out of the Bayesian update, but it is much more limited information.
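The same machinery makes the Popperian point concrete (again a hypothetical run, here 100 heads and a single tail): one tail drives the posterior density at $p = 1$ to exactly zero, no matter how many heads accompany it.

```python
from scipy.stats import beta

# Hypothetical run: 100 heads and a single tail.
posterior = beta(100 + 1, 1 + 1)

# The lone tail falsifies p = 1 outright: the factor (1 - p)^T
# kills the density there, however large H gets.
print(posterior.pdf(1.0))  # -> 0.0
# Symmetrically, a single head would falsify p = 0.
print(posterior.pdf(0.0))  # -> 0.0
```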