Interesting, thanks. My intuition is that if you draw a circle of say a dozen (?) meters around the target, there’s no spot within that circle that is more or less likely to be hit than any other, and it’s only outside the circle that you start having something like a normal distribution. I really don’t see why I should think a hit 35 centimeters to the target’s right is any more (or less) likely than one 42 centimeters to his left. Can you think of any good reason why I should think that? (Not saying my intuition is better than yours. I just want to get where I’m wrong if I am.)
>Can you think of any good reason why I should think that?
Intuition. Imagine a picture with a bright spot in the center, and blur it. The brightest point will still be in the center (before rounding pixel values off to the nearest integer, that is; only after rounding may a disk of exactly equiprobable points form).
My answer: because a strictly monotonic[1] probability distribution prior to accounting for external factors (either “there might be negligible aiming errors” or “the bullet will fly exactly where needed” is a suitable prior) will remain strictly monotonic when blurred[2] with a monotonic kernel[2] formed by those factors (if we assume wind and all that create a normal distribution, it fits).
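For what it’s worth, the blur argument is easy to check numerically. A minimal sketch, assuming a Gaussian blur kernel and a prior that puts all its mass on the aim point (all the numbers here are arbitrary):

```python
import numpy as np

# Prior over impact position: all mass at the aim point (a spike at x = 0).
x = np.arange(-50, 51)
prior = np.zeros(x.size)
prior[x.size // 2] = 1.0          # index 50 corresponds to x = 0

# Monotonic blur kernel from the external factors (Gaussian, width arbitrary).
sigma = 10.0
kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
kernel /= kernel.sum()

# Blur the prior with the kernel: the peak stays at the aim point, and the
# blurred distribution still falls off monotonically away from it.
blurred = np.convolve(prior, kernel, mode="same")
print(x[np.argmax(blurred)])      # -> 0
```

The same holds if the prior is merely peaked rather than a spike; the spike is just the cleanest case.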
>My answer: because a strictly monotonic[1] probability distribution prior to accounting for external factors
Ok so that’s defo what I think assuming no external factors, yes. But if I know that there are external factors, I know the bullet will deviate for sure. I don’t know where but I know it will. And it might luckily deviate a bit back and forth and come back exactly where I aimed, but I don’t get how I can rationally believe that’s any more likely than it doing something else and landing 10 centimeters more on the right. And I feel like what everyone in the comments so far is saying is basically “Well, POI!”, taking it for granted/self-obvious, but afaict, no one has actually justified why we should use POI rather than simply remain radically agnostic on whether the bullet is more likely to hit the target than the kid. I feel like your intuition pump, for example, is implicitly assuming POI and is sort of justifying POI with POI.
>But if I know that there are external factors, I know the bullet will deviate for sure. I don’t know where but I know it will.
You assume that the blur kernel is non-monotonic, and this is our entire disagreement. I guess that different tasks have different noise structure (for instance, if the noise somehow increased geometrically - ±1, ±2, …, ±2^i - we wouldn’t ever return to an exact point we had left).
However, if the noise is composed of many small i.i.d. parts, then it has an approximately normal distribution (by the central limit theorem), which is monotonic in the relevant sense.
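A quick simulation of that claim (a sketch; the individual perturbations here are deliberately non-Gaussian, and all sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(0)

# Each shot's total deviation is the sum of many small i.i.d. perturbations,
# none of which is itself Gaussian (each is uniform on [-1, 1]).
n_shots, n_parts = 50_000, 200
deviation = rng.uniform(-1.0, 1.0, size=(n_shots, n_parts)).sum(axis=1)

# By the central limit theorem the sums are approximately normal: the
# histogram of total deviations peaks at zero and falls off monotonically.
counts, edges = np.histogram(deviation, bins=17, range=(-40, 40))
centers = (edges[:-1] + edges[1:]) / 2
print(centers[np.argmax(counts)])  # peak near 0
```

Swapping the uniform parts for any other zero-mean i.i.d. noise with finite variance gives the same picture.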
I mentioned this in my comment above, but I think it might be worthwhile to differentiate more explicitly between probability distributions and probability density functions. You can have a monotonically-decreasing probability density function F(r) (aka the probability of being in some range is the integral of F(r) over that range, integral over all r values is normalized to 1) and have the expected value of r be as large as you want. That’s because the expected value is the integral of r*F(r), not the value or integral of F(r).
I believe the expected value of r in the stated scenario is large enough that missing is the most likely outcome by far. I am seeing some people argue that the expected distribution is F(r,θ) in a way that is non-uniform in θ, which seems plausible. But I haven’t yet seen anyone give an argument for the claim that the aimed-at point is not the peak of the probability density function, or that we have access to information that allows us to conclude that integrating the density function over the larger-and-aimed-at target region will not give us a higher value than integrating over the smaller-and-not-aimed-at child region.
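The density-vs-distance distinction is easy to see in a simulation. A sketch, assuming a symmetric 2D Gaussian impact distribution and comparing two equal-sized regions so that region size isn’t doing the work (all units arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

# Symmetric 2D Gaussian impact distribution around the aim point (origin).
sigma = 1.0
hits = rng.normal(0.0, sigma, size=(200_000, 2))
r = np.linalg.norm(hits, axis=1)

# The most likely *distance* from the aim point is nonzero (~sigma for a
# Gaussian), even though the density itself peaks at the origin...
counts, edges = np.histogram(r, bins=20, range=(0, 4))
modal_r = (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1]) / 2
print(modal_r)

# ...yet a small disk at the aim point still catches more probability than
# an equal-sized disk one sigma off to the side.
p_center = (r < 0.3).mean()
p_offset = (np.linalg.norm(hits - [1.0, 0.0], axis=1) < 0.3).mean()
print(p_center > p_offset)  # -> True
```

So “you will almost certainly end up some distance from the aim point” and “the aim point is still the densest spot” are entirely compatible.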
Interesting! I also agree with the superior, but I can see where your intuition might be coming from: if we drop a bouncy ball in the middle of a circle, there will be some bounce to it, and maybe the bounce will always be kinda large, so there might be good reason to think it ending up at rest in the very center is less likely than it ending up off-center. For the sniper’s bullet, however, I think it’s different.
Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back—but it only makes sense to do that if you agree that the random walk model is appropriate in the first place.
>Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back—but it only makes sense to do that if you agree that the random walk model is appropriate in the first place.
Oh yeah, good question. I’m not sure because random walk models are chaotic and seem to model situations of what Greaves (2016) calls “simple cluelessness”. Here, we’re in a case she would call “complex”. There are systematic reasons to believe the bullet will go right (e.g., the Earth’s rotation, say) and systematic reasons to believe it will go left (e.g., the wind that we see blowing left). The problem is not that it is random/chaotic, but that we are incapable of weighing up the evidence for left vs the evidence for right, incapable to the point where we cannot update away from a radically agnostic prior on whether the bullet will hit the target or the kid.
Oh… wait a minute! I looked up the Principle of Indifference, to try and find stronger assertions on when it should or shouldn’t be used, and was surprised to see what it actually means! Wikipedia:
>The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence (or “degrees of belief”) equally among all the possible outcomes under consideration. In Bayesian probability, this is the simplest non-informative prior.
So I think the superior is wrong to call it “principle of indifference”! You are the one arguing for indifference: “it could hit anywhere in a radius around the targets, and we can’t say more” is POI. “It is more likely to hit the adult you aimed at” is not POI! It’s an argument about the tendency of errors to cancel.
Error cancelling tends to produce Gaussian distributions. POI gives uniform distributions.
I still think I agree with the superior that it’s marginally more likely to hit the target aimed for, but now I disagree with them that this assertion is POI.
So, as you noted in another comment, this depends on your understanding of the nature of the types of errors individual perturbations are likely to induce. I was automatically guessing many small random perturbations that could be approximated by a random walk, under the assumption that any systematic errors are the kind of thing the sniper could at least mostly adjust for even at extreme range. Which I could be easily convinced is completely false in ways I have no ability to concretely anticipate.
That said, whatever assumptions I make about the kinds of errors at play, I am implicitly mapping out some guessed-at probability density function. I can be convinced it skews left or right, down or up. I can be convinced, and already was, that it falls off at a rate such that if I define it in polar coordinates and integrate over theta, the most likely distance-from-targeted-point is some finite nonzero value. (This kind of reasoning comes up sometimes in statistical mechanics, since systems are often not actually at a/the maxentropy state, but instead within some expected phase-space distance of maxentropy, determined by how quickly the density of states changes.)
But to convince me that the peak of the probability density function is somewhere other than the origin (the intended target), I think I’d have to be given some specific information about the types of error present that the sniper does not have in the scenario, or which the sniper knows but is somehow still unable to adjust for. Lacking such information, then for decision making purposes, other than “You’re almost certainly going to miss” (which I agree with!), it does seem to me that if anyone gets hit, the intended target who also has larger cross-sectional area seems at least a tiny bit more likely.
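For anyone who does want to run the random-walk version proposed above, here is a minimal sketch. Every number in it is invented for illustration: the step size, the band widths, and the 200 cm offset are not from the scenario.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-walk error model: each shot's lateral deviation (in cm) is the sum
# of many small i.i.d. perturbations the sniper cannot individually adjust for.
n_shots, n_steps = 20_000, 500
deviation = rng.normal(0.0, 3.0, size=(n_shots, n_steps)).sum(axis=1)

# Hypothetical geometry: a 50 cm-wide target band centered on the aim point,
# and a 30 cm-wide child band centered 200 cm to one side.
hit_target = np.abs(deviation) < 25.0
hit_child = np.abs(deviation - 200.0) < 15.0

print(hit_target.mean())                   # hitting the target: more likely
print(hit_child.mean())                    # hitting the child: much less likely
print((~hit_target & ~hit_child).mean())   # missing both: likeliest of all
```

Which matches both claims in the thread: the aimed-at target is meaningfully more likely to be hit than the off-center child, and a clean miss is still the most likely single outcome.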
[1] in this case—a distribution with any point close to the center having higher probability assigned than a point farther away
[2] in the image-processing sense