One can argue about whether, in theory, there’s a difference there, or whether true randomness exists. But I think that’s irrelevant and that, practically speaking, there is a difference. In the case where a collision is actually going to happen, more epistemic uncertainty (i.e. worse measurement data) cannot change that fact, but more aleatory variability (i.e. applying random forces to the satellites) can actually make them less likely to collide. Asking whether the “random” forces applied could theoretically be known does not change that fact.
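To make that asymmetry concrete, here’s a minimal Monte Carlo sketch (the 1-D geometry and all the numbers are invented for illustration): worse measurement leaves the collision rate untouched, while a random thrust actually lowers it.

```python
import random

# Minimal sketch (all numbers invented). "Collision" means the satellite's
# final 1-D position lands within 1 unit of the point the other satellite
# will occupy.

TRIALS = 100_000
COLLISION_POINT = 0.0
MISS_RADIUS = 1.0

def collides(final_position):
    return abs(final_position - COLLISION_POINT) < MISS_RADIUS

# Epistemic uncertainty: the satellite is truly headed for the collision
# point; we just measure it badly. Noise changes our estimate, not the
# outcome, so it collides in every simulated world.
epistemic_hits = sum(collides(0.0) for _ in range(TRIALS))

# Aleatory variability: a random thrust spreads out the actual final
# position, so most simulated worlds contain no collision.
aleatory_hits = sum(collides(random.uniform(-10, 10)) for _ in range(TRIALS))

print(f"collision rate with worse measurement: {epistemic_hits / TRIALS:.3f}")  # 1.000
print(f"collision rate with random thrust:     {aleatory_hits / TRIALS:.3f}")  # ~0.100
```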
Similarly, in the Knox example I gave above, it made sense for LW readers to say a 35% chance of guilt (because they hadn’t investigated), and it also made sense for komponisto to say a 1/1000 chance (because he had). LW readers were not saying that, in more than 1 out of 3 cases where the evidence looked the way it did, the accused was guilty; they were just expressing the fact that they had not looked at the evidence. Komponisto was saying that the accused is guilty 1 time out of a thousand, because he had looked at it.
Ah, the Knox example is clear, thank you!
I am confused about the satellites example, however. If two satellites had, say, a 15% chance of collision (by the best we could measure), then when we apply random forces to decrease that chance, I think it’s a bit of a deceptive oversimplification to say that we’re just applying random forces. Because really it’s more like… maybe there are 360 degrees each satellite could head toward, and 45 degrees of danger zone, and we’re applying forces that attempt to push it out of that danger zone and into the other degrees. So we might become less certain about its eventual location, but more certain that it won’t fall into the danger zone, which is the part we care about.
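Here’s a toy version of that degrees picture (the 45-degree danger zone and the noise levels are made-up numbers): after the random thrust we know much less about the final heading, but much more about whether it lands in the danger zone.

```python
import random

# Toy version of the degrees picture (all numbers invented). Headings in
# [0, 45) degrees are the danger zone; everything else is a miss.

TRIALS = 100_000

def in_danger(heading_deg):
    return 0.0 <= heading_deg % 360 < 45.0

# Before the maneuver: the heading is known to within a few degrees and
# points into the danger zone. We are very certain of the location, and
# just as certain of the collision.
before = sum(in_danger(random.gauss(20, 3)) for _ in range(TRIALS))

# After a random thrust: the heading is spread over all 360 degrees. We
# know much less about where the satellite ends up, but much more about
# whether it hits the 45-degree danger zone: roughly 45/360.
after = sum(in_danger(random.uniform(0, 360)) for _ in range(TRIALS))

print(f"P(danger) before the random thrust: {before / TRIALS:.3f}")  # ~1.000
print(f"P(danger) after the random thrust:  {after / TRIALS:.3f}")   # ~0.125
```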
Yeah, I think that’s exactly the point?