Integrals of the likelihood function aren't really meaningful, even if normalized so that the integral over the whole range is one. This is because the result depends on the arbitrary choice of parameterization—e.g., whether you parameterize a probability by p in [0, 1] or by log(p) in (-∞, 0]. In Bayesian inference, one always integrates the likelihood only after multiplying by the prior, which can be seen as a specification of how the integration is to be done.
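A minimal numerical sketch of this point (my own illustration, not from the text above): assuming a binomial likelihood with 3 successes in 10 trials, the fraction of the normalized likelihood "mass" lying below p = 0.5 comes out differently depending on whether you integrate over p or over u = log(p), since no Jacobian factor (i.e., no prior) is there to make the two agree.

```python
import numpy as np

k, n = 3, 10  # assumed example data: 3 successes in 10 trials

def likelihood(p):
    """Binomial likelihood, up to a constant factor."""
    return p**k * (1 - p)**(n - k)

# Parameterization 1: uniform grid in p on (0, 1).
p = np.linspace(1e-9, 1 - 1e-9, 200_001)
Lp = likelihood(p)
mass_p = Lp[p < 0.5].sum() / Lp.sum()  # grid spacing cancels in the ratio

# Parameterization 2: uniform grid in u = log(p) on (-inf, 0).
u = np.linspace(-30.0, -1e-9, 200_001)
Lu = likelihood(np.exp(u))  # same likelihood values, different coordinate
mass_u = Lu[u < np.log(0.5)].sum() / Lu.sum()

print("fraction of normalized likelihood mass with p < 0.5:")
print(f"  integrating over p:      {mass_p:.3f}")
print(f"  integrating over log(p): {mass_u:.3f}")  # noticeably different
```

The two printed fractions disagree because du = dp/p, so integrating in the log parameterization effectively reweights the likelihood by 1/p; only once a prior density is specified (and transformed with the appropriate Jacobian) does the integral become parameterization-invariant.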