Yeah, I discovered that part by accident at one point, because I used the binomial distribution equation in a situation where it didn’t really apply but still got the right answer.
I would think the most natural way to write a likelihood function would be to divide by the integral from 0 to 1, so that the total area under the curve is 1. That way the integral from a to b gives the probability the hypothesis assigns to receiving a result between a and b. But all that really matters is the ratios, which stay the same even without that.
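Here’s a quick numerical check of that “only ratios matter” point. This is just my own toy example (a binomial likelihood for 7 heads in 10 flips, on an arbitrary grid), not anything canonical:

```python
# Toy sketch: with a binomial likelihood for k successes in n trials,
# pre-normalizing the likelihood over [0, 1] changes nothing downstream,
# because Bayes' rule renormalizes anyway -- only ratios matter.
import numpy as np
from scipy.stats import binom

k, n = 7, 10                           # observed data: 7 heads in 10 flips
p = np.linspace(0.0005, 0.9995, 1000)  # grid over the parameter p
dp = p[1] - p[0]
like = binom.pmf(k, n, p)              # raw likelihood L(p)

# Posterior under a uniform prior, straight from the raw likelihood:
post_raw = like / (like.sum() * dp)

# Same thing, but normalizing the likelihood to unit area first:
like_norm = like / (like.sum() * dp)
post_norm = like_norm / (like_norm.sum() * dp)

print(np.allclose(post_raw, post_norm))  # True -- the normalization is a no-op
print(like[800] / like[400],             # and the ratios are identical too
      like_norm[800] / like_norm[400])
```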
Integrals of the likelihood function aren’t really meaningful, even if you normalize so that the integral over the whole range is one. This is because the result depends on the arbitrary choice of parameterization, e.g., whether you parameterize a probability by p in [0, 1] or by log(p) in (−∞, 0]. In Bayesian inference, one only ever integrates the likelihood after multiplying by the prior, which can be seen as a specification of how the integration is to be done.
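To make the parameterization point concrete, here’s a toy numerical check (same binomial example as above; the grid sizes, cutoffs, and the region [0.25, 0.5] are arbitrary choices of mine):

```python
# Sketch of the reparameterization problem: "normalize the likelihood and
# integrate over a region" gives different answers in p versus u = log(p),
# because a likelihood is not a density and picks up no Jacobian under a
# change of variables. Prior * likelihood transforms correctly.
import numpy as np
from scipy.stats import binom

k, n = 7, 10
a, b = 0.25, 0.5                             # ask: "mass" assigned to p in [a, b]

# Parameterized by p (uniform grid, so the spacing cancels in the ratio):
p = np.linspace(1e-4, 1 - 1e-4, 200001)
Lp = binom.pmf(k, n, p)
frac_p = Lp[(p >= a) & (p <= b)].sum() / Lp.sum()

# Parameterized by u = log(p): same likelihood values, different axis.
u = np.linspace(np.log(1e-4), np.log(1 - 1e-4), 200001)
Lu = binom.pmf(k, n, np.exp(u))
frac_u = Lu[(u >= np.log(a)) & (u <= np.log(b))].sum() / Lu.sum()

print(frac_p, frac_u)  # different numbers for the "same" region

# Multiply by a prior (uniform in p, which is the density e**u in u-space,
# i.e. the Jacobian dp/du) and the two parameterizations agree again:
post_u = Lu * np.exp(u)
frac_u_prior = post_u[(u >= np.log(a)) & (u <= np.log(b))].sum() / post_u.sum()
print(frac_p, frac_u_prior)  # equal up to discretization error
```

The first print shows the “normalized likelihood mass” of the region changing when you merely relabel the axis; the second shows that once the prior supplies the Jacobian, the answer no longer depends on the parameterization.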