I have a (I suspect unusual) tendency to look at basic concepts and try to see them in as many ways as possible. For example, here are seven equations, all of which could be referred to as Bayes’ Theorem:
$$P(H|E)=\frac{P(E|H)\,P(H)}{P(E)}$$
$$P(H|E)=\frac{P(E|H)}{P(E)}\cdot P(H)$$
$$P(H|E)=\frac{P(E|H)\,P(H)}{P(E|H)\,P(H)+P(E|\neg H)\,P(\neg H)}$$
$$P(H|E)=\frac{1}{1+\frac{P(E|\neg H)\,P(\neg H)}{P(E|H)\,P(H)}}$$
$$P(H|E)=\frac{P(E|H)\,P(H)}{\sum_i P(E|H_i)\,P(H_i)}$$
$$\mathrm{odds}(H|E)=\frac{P(E|H)}{P(E|\neg H)}\cdot \mathrm{odds}(H)$$
$$\mathrm{logodds}(H|E)=\log\!\left(\frac{P(E|H)}{P(E|\neg H)}\right)+\mathrm{logodds}(H)$$

However, each one is different, and forces a different intuitive understanding of Bayes’ Theorem. The fourth one down is my favourite, as it makes obvious that the update depends only on the ratio of likelihoods. It also gives us our motivation for taking odds, since this clears up the 1/(1+x)ness of the equation.
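As a quick numerical check, here is a sketch showing that three of these forms (the explicit expansion, the odds form, and the log-odds form) give the same posterior. The probabilities used are made-up illustrative numbers, not anything from the discussion above:

```python
import math

# Hypothetical numbers, chosen only for illustration.
p_h = 0.01          # prior P(H)
p_e_h = 0.9         # likelihood P(E|H)
p_e_not_h = 0.05    # likelihood P(E|not-H)

# Third form: expand P(E) over H and not-H.
posterior = (p_e_h * p_h) / (p_e_h * p_h + p_e_not_h * (1 - p_h))

# Odds form: multiply prior odds by the likelihood ratio, convert back.
prior_odds = p_h / (1 - p_h)
posterior_odds = (p_e_h / p_e_not_h) * prior_odds
posterior_from_odds = posterior_odds / (1 + posterior_odds)

# Log-odds form: the same update becomes an addition.
log_posterior_odds = math.log(p_e_h / p_e_not_h) + math.log(prior_odds)
posterior_from_logodds = 1 / (1 + math.exp(-log_posterior_odds))

print(posterior, posterior_from_odds, posterior_from_logodds)
```

All three agree, which is the point: the odds and log-odds forms are just re-parameterisations of the same update.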
Because of this way of understanding things, I find explanations easy, because if one method isn’t working, another one will.
ETA: I’d love to see more versions of Bayes’ Theorem, if anyone has any more to post.
P(H|E) = P(H and E) / P(E)
which tends to be how conditional probability is defined, and actually the first version of Bayes that I recall seeing.
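Spelling that out (a standard one-line derivation from that definition): conjunction is symmetric, so expanding P(H and E) both ways gives

```latex
P(H \wedge E) = P(H|E)\,P(E) = P(E|H)\,P(H)
\quad\Longrightarrow\quad
P(H|E) = \frac{P(E|H)\,P(H)}{P(E)}.
```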
Very well said, and doubles as a reply to the last part of my comment here. (When I read your comment in my inbox, I thought it was actually a reply to that one! Needless to say, my favorite versions of the theorem are the last two you listed.)