I think the principle is fine when applied to how variables affect the movement of the bullet in space. I don’t necessarily think it means taking the shot is the right call, tactically.
Note: I’ve never fired a real gun in any context, so a lot of the specifics of my reasoning are probably wrong but here goes anyway.
Essentially I see the POI as stating that the bullet takes a 2D random walk with unknown step sizes (though possibly with a known distribution of sizes) on its way to the target. As distance increases, variance in the random walk increases.
Given typical bullet speeds we’re talking about >6 seconds to reach the target, possibly much more depending on drag. And the sniper is actually pointing the rifle so it follows a parabolic arc to the target. In that time the bullet falls 0.5*g*t^2 meters, so the sniper is actually aiming at a point roughly 180 m above the target, possibly 500 m if drag stretches the flight time by even a few seconds. More still to the extent that the distance means the angle of the shot has to be so high that you need to account for more vertical and less horizontal velocity, plus more drag. Tripling the distance means a lot more than 3x the number of opportunities for random factors to throw off the shot. The random factors are playing plinko with your bullet.
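The drop arithmetic above can be sketched in a few lines (a vacuum approximation for the vertical component; the flight times are illustrative guesses, not ballistics data):

```python
# Vertical drop over a flight time t, ignoring drag on the vertical
# component: drop = 0.5 * g * t^2. Flight times are illustrative guesses.
G = 9.81  # m/s^2

def drop_m(t_seconds: float) -> float:
    """Metres the bullet falls during t_seconds of flight."""
    return 0.5 * G * t_seconds ** 2

for t in (6, 8, 10):
    print(f"t = {t:>2} s -> drop ~ {drop_m(t):.0f} m")
```

At six seconds that is about 177 m, and at ten seconds about 490 m, which is where the figures above come from.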
After the first mile (limit of known skill) the expected distance of where the bullet lands from the target increases. At some sufficiently far distance, it is essentially landing in a random spot in a normal distribution around the intended target, and whether it hits the terrorist or the child is mostly a function of how much area each takes up (the adult is larger), and the extent to which one is blocking the other (not stated). Regardless, when the variance is high enough, the most likely outcome is “neither.” It hits the ground meters away, and now the terrorists all know you took the shot and from what direction. If the terrorist just started walking, there might be a better chance of hitting the building than anything else.
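A quick Monte Carlo can illustrate the “mostly a function of area, and most likely neither” point. Every number here (the scatter, the silhouette sizes, the child standing 1 m to the side) is invented purely for illustration:

```python
import random

random.seed(0)

SIGMA = 3.0          # std dev of impact scatter, metres (invented)
ADULT = (0.5, 1.7)   # aimed-at silhouette: width x height, metres
CHILD = (0.35, 1.2)  # smaller silhouette, standing 1 m to the right

def hit(x, y, cx, cy, w, h):
    """Did the impact point (x, y) land inside the given rectangle?"""
    return abs(x - cx) <= w / 2 and abs(y - cy) <= h / 2

n, hits_adult, hits_child = 100_000, 0, 0
for _ in range(n):
    x = random.gauss(0, SIGMA)  # aim point is the origin
    y = random.gauss(0, SIGMA)
    if hit(x, y, 0.0, 0.0, *ADULT):
        hits_adult += 1
    elif hit(x, y, 1.0, 0.0, *CHILD):
        hits_child += 1

print(f"P(adult)   ~ {hits_adult / n:.3f}")
print(f"P(child)   ~ {hits_child / n:.3f}")
print(f"P(neither) ~ {1 - (hits_adult + hits_child) / n:.3f}")
```

With a scatter this wide, both hit probabilities are small and “neither” dominates, but the aimed-at, larger silhouette still comes out ahead of the offset, smaller one.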
So, in a strict probabilistic sense, yes, your probability of hitting the terrorist is still higher than hitting the child. If that is the superior’s sole criterion for decision making, they’ve reasoned correctly. That is not the sniper’s decision-making threshold (he wants higher certainty of avoiding the child). I would expect it is also not the higher-ups’ sole criterion, since the shot is most likely going to fail and alert the enemy to their position as well as the limits of their sniping capabilities. I have no idea whether the sniper has any legally defensible way to refuse to follow the order, but if he carries it out, I don’t think issuing the order will reflect well on the superior.
That said: In practice a lot would depend on why the heck they stationed a sniper three miles away from a target with no practical way to turn that positioning into achieving this goal. The first thought I have is that either you’re not where you’re supposed to be, or the people that ordered you there are idiots, and in either case your superior is flailing trying to salvage the situation. The second is that you’re not really expected to take out this target at this range, and your superior is either trying to show off to his superior, or misunderstood his orders, or wasn’t told the real reason for the orders. The third is that of course the sniper would have brought this up hours ago and already figured out the correct decision tree, it’s crazy this conversation is only happening after the terrorist leaves the building.
>At some sufficiently far distance, it is essentially landing in a random spot in a normal distribution around the intended target
Say I tell you the bullet landed either 35 centimeters on the target’s right or 42 centimeters on his left, and ask you to bet on which one you think it is. Are you indifferent/agnostic or do you favor 35 very (very very very very) slightly? (If the former, you reject the POI. If the latter, you embrace it. Or at least that’s my understanding. If you don’t find it more likely the bullet hits a spot a bit closer to the target, then you don’t agree with the superior that aiming at the target makes you more likely to hit him over the child, all else equal.)
The latter. And yes, I do agree with the superior on that specific, narrow mathematical question. If I am trying to run with the spirit and letter of the dilemma as presented, then I will bite that bullet (sorry, I couldn’t resist).
In real-world situations, at the point where you somehow find yourself in such a position, the correct solution is probably “call in air support and bomb them instead, or find a way to fire many bullets at once; you’ve already decided you’re willing to kill a child for a chance to take out the target.”
Similarly, if the terrorist were an unfriendly ASI and the child was the entire population of my home country, and there was knowably no one else in position to take any shot at all, I’d (hope I’m the kind of person who would) take the shot. A coin flip is better than certainty of death, even if it were biased against you quite heavily.
Interesting, thanks. My intuition is that if you draw a circle of say a dozen (?) meters around the target, there’s no spot within that circle that is more or less likely to be hit than any other, and it’s only outside the circle that you start having something like a normal distribution. I really don’t see why I should think the 35 centimeters on the target’s right is any more (or less) likely than 42 centimeters on his left. Can you think of any good reason why I should think that? (Not saying my intuition is better than yours. I just want to get where I’m wrong if I am.)
>Can you think of any good reason why I should think that?
Intuition. Imagine a picture with a bright spot in the center, and blur it. The brightest point will still be in the center (before rounding pixel values off to the nearest integer, that is; only then may a disk of exactly equiprobable points form).
My answer: because a strictly monotonic[1] probability distribution prior to accounting for external factors (either “there might be negligible aiming errors” or “the bullet will fly exactly where needed” is suitable) will remain strictly monotonic when blurred[2] with a monotonic kernel formed by those factors (if we assume wind and all that create a normal distribution, it fits).

[1] in this case: a distribution with any point close to the center having higher probability assigned than a point farther away

[2] in the image-processing sense
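A 1D sketch of that claim, under the stated assumption that the blur from external factors is Gaussian (grid size and widths are arbitrary):

```python
import math

# A distribution sharply peaked at the aim point, blurred (convolved)
# with a monotonic Gaussian kernel, still peaks at the aim point.
N = 501  # grid over [-5, 5]; size and widths below are illustrative
xs = [-5 + 10 * i / (N - 1) for i in range(N)]

def normalized(vals):
    s = sum(vals)
    return [v / s for v in vals]

p = normalized([math.exp(-(x / 0.1) ** 2) for x in xs])  # "no external factors"
k = normalized([math.exp(-(x / 1.5) ** 2) for x in xs])  # blur from the factors

# Discrete convolution, evaluated on the same grid.
half = N // 2
blurred = [sum(p[j] * k[i - j + half] for j in range(N)
               if 0 <= i - j + half < N)
           for i in range(N)]

peak_index = max(range(N), key=blurred.__getitem__)
print(xs[peak_index])  # the centre of the grid, i.e. the aim point
```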
>My answer: because strictly monotonic[1] probability distribution prior to accounting for external factors
Ok so that’s defo what I think assuming no external factors, yes. But if I know that there are external factors, I know the bullet will deviate for sure. I don’t know where but I know it will. And it might luckily deviate a bit back and forth and come back exactly where I aimed, but I don’t get how I can rationally believe that’s any more likely than it doing something else and landing 10 centimeters more on the right. And I feel like what everyone in the comments so far is saying is basically “Well, POI!”, taking it for granted/self-obvious, but afaict, no one has actually justified why we should use POI rather than simply remain radically agnostic on whether the bullet is more likely to hit the target than the kid. I feel like your intuition pump, for example, is implicitly assuming POI and is sort of justifying POI with POI.
>But if I know that there are external factors, I know the bullet will deviate for sure. I don’t know where but I know it will.
You assume that the blur kernel is non-monotonic, and this is our entire disagreement. I guess that different tasks have different noise structure (for instance, if somehow the noise increased geometrically, ±1, ±2, …, ±2^i, we would never return to an exact point we had left).
However, if the noise is composed of many small i.i.d. parts, then by the central limit theorem it has a normal distribution, which is monotonic in the relevant sense.
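A small simulation of that claim: treat the net deviation as the sum of many i.i.d. ±1 perturbations (a plain random walk, purely illustrative), and the single most likely net offset is still zero:

```python
import random
from collections import Counter

random.seed(1)

# Each shot's net deviation is the sum of 100 small i.i.d. +/-1 nudges.
def final_offset(steps=100):
    return sum(random.choice((-1, 1)) for _ in range(steps))

counts = Counter(final_offset() for _ in range(50_000))
mode_offset, _ = counts.most_common(1)[0]
print("most common net offset:", mode_offset)
print("frequency at 0 vs at 10:", counts[0], counts[10])
```

Zero is only very slightly more likely than ±2 here, which matches the “very (very very very very) slightly” hedging upthread.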
I mentioned this in my comment above, but I think it might be worthwhile to differentiate more explicitly between probability distributions and probability density functions. You can have a monotonically-decreasing probability density function F(r) (aka the probability of being in some range is the integral of F(r) over that range, integral over all r values is normalized to 1) and have the expected value of r be as large as you want. That’s because the expected value is the integral of r*F(r), not the value or integral of F(r).
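A concrete instance of that distinction: the exponential density f(r) = (1/m)·exp(−r/m) is strictly decreasing in r for every mean m, yet its expected value is m, which can be as large as we like. A crude midpoint-rule check (the grid bounds are arbitrary):

```python
import math

def expected_r(mean, upper=1000.0, n=100_000):
    """Midpoint-rule estimate of E[r] for f(r) = exp(-r/mean) / mean."""
    dr = upper / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        total += r * math.exp(-r / mean) / mean * dr
    return total

print(expected_r(5.0))   # close to 5
print(expected_r(50.0))  # close to 50
```

So a peak of the density at the aim point is entirely compatible with “the most likely outcome by far is a miss.”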
I believe the expected value of r in the stated scenario is large enough that missing is the most likely outcome by far. I am seeing some people argue that the expected distribution is F(r,θ) in a way that is non-uniform in θ, which seems plausible. But I haven’t yet seen anyone give an argument for the claim that the aimed-at point is not the peak of the probability density function, or that we have access to information that allows us to conclude that integrating the density function over the larger-and-aimed-at target region will not give us a higher value than integrating over the smaller-and-not-aimed-at child region.
Interesting! I also agree with the superior, but I can see where your intuition might be coming from: if we drop a bouncy ball in the middle of a circle, there will be some bounce to it, and maybe the bounce will always be kinda large, so there might be good reason to think it ending up at rest in the very center is less likely than it ending up off-center. For the sniper’s bullet, however, I think it’s different.
Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back—but only makes sense to do that if you agree that the random walk model is appropriate in the first place.
>Do you agree with AnthonyC’s view that the bullet’s perturbations are well-modeled by a random walk? If so, maybe I’ll simulate it if I have time and report back—but only makes sense to do that if you agree that the random walk model is appropriate in the first place.
Oh yeah, good question. I’m not sure because random walk models are chaotic and seem to model situations of what Greaves (2016) calls “simple cluelessness”. Here, we’re in a case she would call “complex”. There are systematic reasons to believe the bullet will go right (e.g., the Earth’s rotation, say) and systematic reasons to believe it will go left (e.g., the wind that we see blowing left). The problem is not that it is random/chaotic, but that we are incapable of weighing up the evidence for left vs the evidence for right, incapable to the point where we cannot update away from a radically agnostic prior on whether the bullet will hit the target or the kid.
Oh… wait a minute! I looked up the Principle of Indifference, to try and find stronger assertions on when it should or shouldn’t be used, and was surprised to see what it actually means! Wikipedia:
>The principle of indifference states that in the absence of any relevant evidence, agents should distribute their credence (or “degrees of belief”) equally among all the possible outcomes under consideration. In Bayesian probability, this is the simplest non-informative prior.
So I think the superior is wrong to call it “principle of indifference”! You are the one arguing for indifference: “it could hit anywhere in a radius around the targets, and we can’t say more” is POI. “It is more likely to hit the adult you aimed at” is not POI! It’s an argument about the tendency of errors to cancel.
Error cancelling tends to produce Gaussian distributions. POI gives uniform distributions.
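The 35 cm vs 42 cm bet makes the difference between the two models concrete. Under a uniform (POI) distribution the two spots are exactly equiprobable; under a Gaussian error model the nearer spot is favoured, though only barely (the spread here is an invented number):

```python
import math

sigma = 2.0  # metres; invented horizontal error spread for illustration

def gauss_density(x, s=sigma):
    """Density of a zero-mean normal distribution at offset x."""
    return math.exp(-x * x / (2 * s * s)) / (s * math.sqrt(2 * math.pi))

# Gaussian model: 35 cm from the aim point vs 42 cm on the other side.
ratio = gauss_density(0.35) / gauss_density(0.42)
print(ratio)  # ~1.007: the nearer spot is favoured, but only just
```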
I still think I agree with the superior that it’s marginally more likely to hit the target aimed for, but now I disagree with them that this assertion is POI.
So, as you noted in another comment, this depends on your understanding of the nature of the types of errors individual perturbations are likely to induce. I was automatically guessing many small random perturbations that could be approximated by a random walk, under the assumption that any systematic errors are the kind of thing the sniper could at least mostly adjust for even at extreme range. Which I could be easily convinced is completely false in ways I have no ability to concretely anticipate.
That said, whatever assumptions I make about the kinds of errors at play, I am implicitly mapping out some guessed-at probability density function. I can be convinced it skews left or right, down or up. I can be convinced, and already was, that it falls off at a rate such that if I define it in polar coordinates and integrate over theta that the most likely distance-from-targeted-point is some finite nonzero value. (This kind of reasoning comes up sometimes in statistical mechanics, since systems are often not actually at a/the maxentropy state, but instead within some expected phase-space distance of maxentropy, determined by how quickly density of states changes).
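The most-likely-distance point can be checked directly: for a symmetric 2D Gaussian scatter the density per point peaks at the aim point, but the distance from the aim point follows a Rayleigh distribution whose peak is at r = σ, not zero, because the annulus at radius r has area growing with r. A simulation under that assumption:

```python
import math
import random

random.seed(2)

sigma = 1.0  # scatter scale; arbitrary units for illustration
radii = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
         for _ in range(100_000)]

# Histogram the radii and find the most populated bin.
bin_w = 0.05
counts = {}
for r in radii:
    b = int(r / bin_w)
    counts[b] = counts.get(b, 0) + 1
mode_r = (max(counts, key=counts.get) + 0.5) * bin_w
print(f"most likely distance ~ {mode_r:.2f} (Rayleigh mode = sigma = {sigma})")
```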
But to convince me that the peak of the probability density function is somewhere other than the origin (the intended target), I think I’d have to be given some specific information about the types of error present that the sniper does not have in the scenario, or which the sniper knows but is somehow still unable to adjust for. Lacking such information, for decision-making purposes, other than “You’re almost certainly going to miss” (which I agree with!), it does seem to me that if anyone gets hit, the intended target, who also has larger cross-sectional area, seems at least a tiny bit more likely.