Interesting! I also agree with the superior, but I can see where your intuition might be coming from: if we drop a bouncy ball in the middle of a circle, there will be some bounce to it, and maybe the bounce will always be kinda large, so there might be good reason to think it ending up at rest in the very center is less likely than it ending up off-center. For the sniper’s bullet, however, I think it’s different.
Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back—but it only makes sense to do that if you agree that the random walk model is appropriate in the first place.
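For concreteness, here’s a minimal sketch of the kind of simulation I have in mind (all the parameters—step count, step size, target widths—are placeholders, not claims about real ballistics):

```python
import random

random.seed(42)

N_SHOTS = 50_000   # simulated shots (arbitrary)
N_STEPS = 100      # perturbation steps over the flight (arbitrary)
STEP = 0.1         # size of each lateral nudge (arbitrary units)

def final_displacement():
    """Random walk: every step nudges the bullet left or right."""
    x = 0.0
    for _ in range(N_STEPS):
        x += random.choice((-STEP, STEP))
    return x

finals = [final_displacement() for _ in range(N_SHOTS)]

# Compare a window centred on the aim point with an equal-width
# window centred off to one side (where the kid might be standing).
on_target = sum(1 for x in finals if abs(x) <= 1.0)
off_target = sum(1 for x in finals if 2.0 <= x <= 4.0)
print(on_target, off_target)
```

If the walk model is right, the endpoint distribution is binomial—so approximately Gaussian and peaked at the aim point—which is exactly the superior’s claim.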
Oh yeah, good question. I’m not sure, because random walk models are chaotic and seem to fit situations of what Greaves (2016) calls “simple cluelessness”. Here, we’re in a case she would call “complex”. There are systematic reasons to believe the bullet will go right (e.g., the Earth’s rotation, say) and systematic reasons to believe it will go left (e.g., the wind that we see blowing left). The problem is not that the flight is random/chaotic, but that we are incapable of weighing up the evidence for left against the evidence for right, incapable to the point where we cannot update away from a radically agnostic prior on whether the bullet will hit the target or the kid.
Oh… wait a minute! I looked up the Principle of Indifference, to try and find stronger assertions about when it should or shouldn’t be used, and was surprised to see what it actually means! Wikipedia:
>The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence (or “degrees of belief”) equally among all the possible outcomes under consideration. In Bayesian probability, this is the simplest non-informative prior.
So I think the superior is wrong to call it “principle of indifference”! You are the one arguing for indifference: “it could hit anywhere in a radius around the targets, and we can’t say more” is POI. “It is more likely to hit the adult you aimed at” is not POI! It’s an argument about the tendency of errors to cancel.
Error cancelling tends to produce Gaussian distributions (that’s the central limit theorem at work); POI gives uniform distributions.
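You can see the contrast directly in simulation. Here’s a quick sketch (bin count and number of error sources are arbitrary): the miss distance is modeled as a sum of many small independent errors, and we check how much probability mass the central bins get compared to the equal share POI/uniform would assign them.

```python
import random

random.seed(1)

N_SHOTS = 100_000
N_ERRORS = 30  # independent small error sources per shot (arbitrary)

# Lateral miss = sum of many small zero-mean errors (wind gusts,
# hand tremor, etc.). This is the "error cancelling" picture.
misses = [sum(random.uniform(-1, 1) for _ in range(N_ERRORS))
          for _ in range(N_SHOTS)]

# POI over the observed range would put an equal share in every bin.
lo, hi = min(misses), max(misses)
n_bins = 10
bin_width = (hi - lo) / n_bins
counts = [0] * n_bins
for m in misses:
    i = min(int((m - lo) / bin_width), n_bins - 1)
    counts[i] += 1

uniform_share = N_SHOTS / n_bins
central = counts[n_bins // 2 - 1] + counts[n_bins // 2]  # middle two bins
print(central, 2 * uniform_share)
```

The central bins end up with far more than their uniform share—Gaussian, not flat—which is the superior’s conclusion, reached without ever invoking indifference.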
I still think I agree with the superior that the bullet is marginally more likely to hit the target aimed for, but now I disagree with them that this assertion follows from POI.