Suppose you’re Sherlock Holmes investigating a case in which a red hair was left at the scene of the crime.
The Scotland Yard detective says, “Aha! Then it’s Miss Scarlet. She has red hair, so if she was the murderer she almost certainly would have left a red hair there. P(redhair∣Scarlet)=99%, let’s say, which is a near-certain conviction, so we’re done.”
“But no,” replies Sherlock Holmes. “You see, but you do not correctly track the meaning of the conditional probabilities, detective. The knowledge we require for a conviction is not P(redhair∣Scarlet), the chance that Miss Scarlet would leave a red hair, but rather P(Scarlet∣redhair), the chance that this red hair was left by Scarlet. There are other people in this city who have red hair.”
“So you’re saying...” the detective said slowly, “that P(redhair∣Scarlet) is actually much lower than 1?”
“No, detective. I am saying that just because P(redhair∣Scarlet) is high does not imply that P(Scarlet∣redhair) is high. It is the latter probability in which we are interested—the degree to which, knowing that a red hair was left at the scene, we infer that Miss Scarlet was the murderer. This is not the same quantity as the degree to which, assuming Miss Scarlet was the murderer, we would guess that she might leave a red hair.”
“But surely,” said the detective, “these two probabilities cannot be entirely unrelated?”
“Ah, well, for that, you must read up on Bayes’ rule.”
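To make the distinction concrete, here is a minimal sketch of what Bayes' rule does with the detective's number. The 99% likelihood is from the dialogue above; the prior on Miss Scarlet and the base rate of red hair among other possible culprits are made-up assumptions purely for illustration.

```python
# Bayes' rule sketch for the red-hair example.
# Only the 99% figure comes from the dialogue; the prior and the
# base rate below are assumed numbers chosen for illustration.

# Prior probability that Miss Scarlet is the murderer, before seeing the hair.
p_scarlet = 0.10

# Likelihoods: how probable a red hair at the scene is under each hypothesis.
p_redhair_given_scarlet = 0.99       # the detective's figure
p_redhair_given_not_scarlet = 0.20   # assumed: chance some other culprit leaves a red hair

# Marginal probability of finding a red hair at all (law of total probability).
p_redhair = (p_redhair_given_scarlet * p_scarlet
             + p_redhair_given_not_scarlet * (1 - p_scarlet))

# Bayes' rule: P(Scarlet | red hair) = P(red hair | Scarlet) * P(Scarlet) / P(red hair)
p_scarlet_given_redhair = p_redhair_given_scarlet * p_scarlet / p_redhair

print(f"P(red hair | Scarlet) = {p_redhair_given_scarlet:.2f}")
print(f"P(Scarlet | red hair) = {p_scarlet_given_redhair:.2f}")
```

With these assumed numbers the posterior P(Scarlet∣redhair) comes out to roughly 0.35: the red hair is real evidence against Miss Scarlet, but nowhere near the 99% the detective was treating as a near-certain conviction.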
Update: Confusion of the Inverse
Is there an aphorism for the mistake of assuming P(E|H) = P(H|E)?
Suggestions:
Thou shalt not reverse thy probabilities!
Thou shalt not mix up thy probabilities!
Thou shalt not invert thy probabilities! -- Based on Confusion of the Inverse
Arbital conditional probability
Don’t confuse probabilities and likelihoods?
I’m looking for something simpler that doesn’t require understanding another concept besides probability.
The article you posted is a bit confusing:
So is Arbital:
Fixed: the “Miss Scarlett” hypothesis assigns a probability of 20% to e