Also, he apparently updates by a Bayes factor of 8.5 the first time his helmet gets hit, and 33 the second time. I’m not sure how to justify that.
Suppose you start out 85% confident that the one remaining enemy soldier is not a sniper. That leaves only 15% credence to the hypothesis that he is a sniper. But then, a bullet glances off your helmet — an event far more likely if the enemy soldier is a sniper than if he is not. So now you’re only 40% confident he’s a non-sniper, and 60% confident he is a sniper. Another bullet glances off your helmet, and you update again. Now you’re only 2% confident he’s a non-sniper, and 98% confident he is a sniper.
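The implied Bayes factors can be computed directly from the stated probabilities, using the odds form of Bayes' theorem (a quick sketch, assuming the numbers in the story are exact):

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * Bayes factor,
# so the implied Bayes factor is posterior odds divided by prior odds.

def bayes_factor(prior_sniper, posterior_sniper):
    prior_odds = prior_sniper / (1 - prior_sniper)
    posterior_odds = posterior_sniper / (1 - posterior_sniper)
    return posterior_odds / prior_odds

first = bayes_factor(0.15, 0.60)   # 15% -> 60% after the first hit
second = bayes_factor(0.60, 0.98)  # 60% -> 98% after the second hit

print(round(first, 2), round(second, 2))  # → 8.5 32.67
```

This recovers the 8.5 and roughly 33 figures, and also makes the oddity visible: if the two hits carry the same evidential weight, the two factors should be equal.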
I’m fairly certain that’s because of an approximation of the coincidence factor. After the first shot, a single event specifically aimed at the helmet, the odds that the shooter is a sniper shoot right up, but there’s still some chance of a coincidence. Once it happens twice, the coincidence probabilities multiply and become ridiculously low.
The point isn’t that the evidence (bullet hits helmet at X distance) is applied twice, but that the second piece of evidence is really (helmet hit at X distance twice in a row), which is far less likely under the non-sniper hypothesis and correspondingly more likely under the sniper hypothesis. The likelihoods multiply with the priors somewhere along the way, which is how you arrive at the conclusion that the remaining enemy is 98% likely to be a sniper.
Of course, I’m only myself trying to rationalize and guess at the OP’s intent. I could also very well be wrong in my understanding of how this is supposed to work.
That doesn’t work. If the probability of a regular soldier being able to hit the helmet once is p, then conditional on him hitting the helmet the first time, the probability that he can hit it again is still p. Your argument is the Gambler’s fallacy.
(Offtopic: Whoa, what’s with the instant downvote? That was a valid point he was making IMO.)
While it would indeed be the Gambler’s fallacy to believe the second shot is less than p-likely to hit, I was initially .85 confident that he was a non-sniper, which went to .4 as soon as the helmet was hit once. If he was a non-sniper, there was a probability p that he hit me; if he was a sniper, there was a probability s.
Once the helmet is hit twice, the probability of that exact event (helmet-hit-twice) under the non-sniper hypothesis is p^2, even though each individual shot was p-likely regardless of the result of the other. By comparison, s^2 is not nearly as low, so the gap between the two hypotheses’ likelihoods is larger for (helmet-hit-twice) than for (helmet-hit-once), hence the greater update in beliefs.
Again, I’m just trying to rationalize on the assumption that the OP intended this to be accurate. There might be something the author didn’t mention, or the numbers could just be a rough approximation with no actual Bayesian math behind them, which in my opinion would be fairly excusable given how useful this article already is.
If you update by a ratio s/p for one hit on the helmet, you should update by s^2/p^2 for two hits, which looks just like updating by s/p twice, since updating is just like multiplying by the Bayes factor.
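This consistency can be checked numerically. Here is a minimal sketch with hypothetical per-shot hit probabilities s (sniper) and p (regular soldier), which are not given in the original example:

```python
# Updating the odds twice by s/p gives the same posterior as updating once
# by (s/p)**2 on the joint event "hit twice".
# s and p are hypothetical per-shot hit probabilities; the prior odds come
# from the story's 85/15 split.

s, p = 0.9, 0.1
prior_odds = 0.15 / 0.85           # odds in favor of "sniper"

sequential = prior_odds * (s / p) * (s / p)  # two separate updates
joint = prior_odds * (s / p) ** 2            # one update on the joint event

assert abs(sequential - joint) < 1e-9

posterior = joint / (1 + joint)    # convert odds back to a probability
print(round(posterior, 3))         # → 0.935
```

So a Bayesian who is consistent about the likelihoods cannot update by 8.5 the first time and 33 the second time; the two factors must match.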
Hmm. I’ll have to learn a bit more about the actual theory and math behind Bayes’ theorem before I can go deeper in my analysis without spouting things I can’t verify. My intuition says there’s a mathematical process that would perfectly explain the discrepancy with minimal extra assumptions, but that’s just intuition.
For now, I’ll simply update my beliefs according to the evidence you’re giving me, weighing my confidence that you’re right against my (lack of) confidence in my own understanding of the situation.
The interesting bit is that the helmet was hit twice, so we’re looking at the probability of being shot twice, not the probability of being shot the second time conditional on being shot the first time.
In retrospect, my first attempt at explanation was fairly poor. Is this clearer?
Edit: To more specifically address your objection: P(hit twice) = P(hit 1st time) * P(hit 2nd time | hit 1st time) = P(hit)^2, since the shots are independent.
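A quick simulation bears this out: with independent shots and an arbitrary hypothetical per-shot hit chance p, the empirical frequency of two hits in a row converges to p^2.

```python
import random

random.seed(0)   # reproducible run
p = 0.3          # hypothetical per-shot hit probability (not from the story)
trials = 200_000

# Count trials in which both independent shots hit the helmet.
both = sum(
    (random.random() < p) and (random.random() < p)
    for _ in range(trials)
)

print(both / trials)  # close to p**2 = 0.09
```

Note that P(hit 2nd time | hit 1st time) = P(hit) is exactly the independence assumption; nothing here contradicts the Gambler’s-fallacy point about any single shot.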
yes