The “paradox” here is that when one person says there’s a 70% chance that the satellites are safe, and another says there’s a 99.9% chance that they’re safe, it sounds like the second person must be much more certain about what’s going on up there. But in this case, the opposite is true.
When someone says “there’s a 99.9% chance that the satellites won’t collide,” we naturally imagine that this statement is being generated by a process that looks like “I performed a high-precision measurement of the closest approach distance, my central estimate is that there won’t be a collision, and the case where there is a collision is off in the wings of my measurement error such that it has a lingering 0.1% chance.” But the same probability estimate can be generated by a very low-precision measurement with a central estimate that there will be a collision. The former case is cause to relax; the latter is not. Yeah, in a sense this is obvious. But it’s a reminder that seeing a probability estimate isn’t a substitute for real diligence.
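To put rough numbers on that, here’s a minimal one-dimensional sketch (Gaussian error model, made-up figures: a 10 m combined hard-body radius, central miss estimates of 500 m vs. 0 m, and tracking uncertainties of 185 m vs. 8 km) in which the careful tracker and the sloppy tracker both report roughly “99.9% safe”:

```python
from scipy.stats import norm

def prob_no_collision(est_miss_m, sigma_m, hard_body_radius_m=10.0):
    """P(true miss distance falls outside the combined hard-body radius),
    treating the reported miss distance as Gaussian with std dev sigma_m.
    One-dimensional toy model, not a real conjunction-analysis formula."""
    p_hit = (norm.cdf(hard_body_radius_m, loc=est_miss_m, scale=sigma_m)
             - norm.cdf(-hard_body_radius_m, loc=est_miss_m, scale=sigma_m))
    return 1.0 - p_hit

# High-precision tracking: the central estimate is a comfortable 500 m miss.
print(prob_no_collision(est_miss_m=500.0, sigma_m=185.0))   # ~0.999

# Low-precision tracking: the central estimate is a dead-on collision, but
# ~8 km of uncertainty spreads the hit probability so thin that the
# headline number comes out the same.
print(prob_no_collision(est_miss_m=0.0, sigma_m=8000.0))    # ~0.999
```

Same “99.9%,” completely different situations: in the first case the data rule out a collision, and in the second the data are too weak to rule anything in.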
Right, exactly. But this isn’t only about satellite tracking. A lot of the time you don’t have the luxury of comparing the high-precision estimate to the low-precision estimate. You’re only talking to the second guy, and it’s important not to take his apparent confidence at face value. Maybe this is obvious to you, but a lot of the content on this site is about explicating common errors of logic and statistics that people might fall for. I think it’s valuable.
In the satellite tracking example, the thing to do is exactly as you say: whatever the error bars on your measurements are, treat them as the effective size of the satellite. If you can only resolve positions to within 100 meters, then any approach within 100 meters counts as a “collision.”
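As a rough sketch of that rule (the 10 m hard-body radius and the one-sigma padding are illustrative assumptions, not anything from the paper), continuing the toy model above:

```python
def flag_conjunction(est_miss_m, sigma_m, hard_body_radius_m=10.0, k=1.0):
    """Conservative screening rule from the discussion: pad the hard-body
    radius by (a multiple of) the measurement uncertainty, so any approach
    that cannot be resolved as a clear miss gets flagged for follow-up."""
    effective_radius_m = hard_body_radius_m + k * sigma_m
    return est_miss_m <= effective_radius_m

# The low-precision tracker that reported “99.9% safe” above now raises a
# flag, because its central estimate is nowhere near a resolvable miss.
print(flag_conjunction(est_miss_m=0.0, sigma_m=8000.0))    # True
print(flag_conjunction(est_miss_m=500.0, sigma_m=100.0))   # False
```

The point of the padding is to make the decision degrade gracefully with data quality: worse tracking produces more flags, not more false reassurance.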
I’m also curious about the “likelihood-based sampling distribution framework” mentioned in the cited arXiv paper. The paper claims that “this alternative interpretation is not problematic,” but it seems like its interpretation of the satellite example is substantially identical to the Bayesian interpretation. The lesson to draw from the false confidence theorem is “be careful,” not “abandon all the laws of ordinary statistics in favor of an alternative conception of uncertainty.”