I was thinking in the context of Bayes' theorem. The articles that describe how evidence updating works, using Bayes' theorem, never seem to include confidence intervals. Maybe I have just looked in the wrong places. I’ll find out once I’ve gone through the links somervta gave me.
If you’re doing one of those simple problems (e.g. a cancer test has false positive rate X and false negative rate Y, and the prior rate of cancer is Z), then you’re not getting confidence intervals because you’re assuming that X, Y, and Z are known exactly. If you supply confidence intervals for X, Y, and Z, you will get confidence intervals out as well.
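To make the point-estimate case concrete, here is a minimal sketch of that textbook calculation. The numbers (1% base rate, 5% false positive rate, 10% false negative rate) are hypothetical, chosen just for illustration:

```python
# Hypothetical inputs, assumed to be known exactly:
prior = 0.01   # Z: P(cancer)
fp = 0.05      # X: P(positive | no cancer)
fn = 0.10      # Y: P(negative | cancer)

sensitivity = 1 - fn  # P(positive | cancer)

# Bayes' theorem: P(cancer | positive)
p_positive = sensitivity * prior + fp * (1 - prior)
posterior = sensitivity * prior / p_positive

print(round(posterior, 4))  # → 0.1538
```

Because X, Y, and Z are treated as exact, the output is a single number with no uncertainty attached.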
Similarly, 2+2 will always give you 4, without a confidence interval attached. But if you add two numbers with confidence intervals, you’ll get a number with a confidence interval, probably after you make some assumptions about independence or about what your intervals mean.
May I suggest a book on statistics? I am partial to Larry Wasserman’s “All of Statistics,” but Larry is not a Bayesian.