I imagine the conversation in the control room where they’re tracking the satellites and deciding whether to have one of them make a burn:
“What’s the problem? 99.9% chance they’re safe!”
“We’re looking at 70%.” [Gestures at all the equipment receiving data and plotting projected paths.] “Where did you pull 99.9% from?”
“Well, how often does a given pair of satellites collide? Pretty much never, right? Outside view, man, outside view!”
“You’re fired. Get out of the room and leave this to the people who have a clue.”
Right, exactly. But this isn’t only about satellite tracking. A lot of the time you don’t have the luxury of comparing the high-precision estimate to the low-precision estimate. You’re only talking to the second guy, and it’s important not to take his apparent confidence at face value. Maybe this is obvious to you, but a lot of the content on this site is about explicating common errors of logic and statistics that people might fall for. I think it’s valuable.
In the satellite tracking example, the thing to do is exactly as you say: whatever the error bars on your measurements, treat that as the effective size of the satellite. If you can only resolve positions to within 100 meters, then any approach within 100 meters counts as a “collision.”
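To make that concrete, here’s a rough sketch of the “treat the error bars as the effective size of the satellite” rule. Everything in it is made up for illustration (roughly Gaussian position error with a known sigma, a 3-sigma cutoff, the function name); it’s just the decision rule above written out, not anything from the paper.

```python
import numpy as np

def conservative_conjunction_flag(rel_position_est, hard_body_radius_m,
                                  position_sigma_m, n_sigma=3.0):
    """Flag a conjunction as dangerous if the *estimated* miss distance falls
    within the hard-body radius inflated by the measurement uncertainty.

    rel_position_est   : estimated relative position of the two objects (m), shape (3,)
    hard_body_radius_m : combined physical size of the two objects (m)
    position_sigma_m   : 1-sigma uncertainty of the relative position estimate (m)
    n_sigma            : how many sigmas of measurement error count as "could be a hit"
    """
    estimated_miss_distance = np.linalg.norm(rel_position_est)
    effective_radius = hard_body_radius_m + n_sigma * position_sigma_m
    return estimated_miss_distance <= effective_radius

# If we can only resolve relative position to ~100 m (1 sigma), then an estimated
# approach within a few hundred meters gets treated as a possible collision,
# rather than trusting a precise-looking "probability of collision" computed
# from data that can't actually resolve the encounter geometry.
print(conservative_conjunction_flag(np.array([250.0, 40.0, 10.0]), 20.0, 100.0))  # True
```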
I’m also curious about the “likelihood-based sampling distribution framework” mentioned in the cited arXiv paper. The paper claims that “this alternative interpretation is not problematic,” but it seems like its interpretation of the satellite example is substantially identical to the Bayesian interpretation. The lesson to draw from the false confidence theorem is “be careful,” not “abandon all the laws of ordinary statistics in favor of an alternative conception of uncertainty.”
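For anyone who wants to see the effect the theorem is pointing at, here’s a toy Monte Carlo of the “probability dilution” behavior in the satellite example. The setup is my own (a 2-D encounter plane, Gaussian measurement error, a 10 m combined radius), not the paper’s machinery; the point is just that the computed probability of collision shrinks as the data gets worse, even when the satellites really are on a collision course.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rel_pos = np.array([5.0, 0.0])  # true miss distance of 5 m: genuinely a collision course
collision_radius = 10.0              # combined hard-body radius (m)

for sigma in [1.0, 10.0, 100.0, 1000.0]:
    # One noisy measurement of the relative position in the encounter plane.
    estimate = true_rel_pos + rng.normal(0.0, sigma, size=2)
    # Epistemic distribution: Gaussian centered on that estimate with the known noise level.
    # Probability of collision = mass of that distribution inside the collision radius.
    samples = estimate + rng.normal(0.0, sigma, size=(100_000, 2))
    p_collision = np.mean(np.linalg.norm(samples, axis=1) < collision_radius)
    print(f"sigma = {sigma:6.0f} m   P(collision) ~ {p_collision:.4f}")

# The worse the data (bigger sigma), the smaller the computed probability of
# collision, so the analyst becomes *more* confident of safety from *less*
# information. That is the false confidence being warned about.
```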
Thank you. Maybe I over-indexed on using the satellite example, but I thought it made for a better didactic example in part because it was so obvious. I provided the other examples to point to cases where I thought the error was less clear.
This is also true. Like I said (maybe not very clearly), there are more or less two solutions: use non-epistemic belief to represent uncertainty, or avoid using epistemic uncertainty in probability calculations. (And you might even be able to sort of squeeze the former solution into the Bayesian representation by always reserving some of your probability mass for a catch-all “something I haven’t thought of” hypothesis, which I think is something Eliezer has even suggested. I haven’t thought about this part in detail.)
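A rough toy illustration of that “squeeze it into the Bayesian representation” move, with all numbers made up; the flat likelihood assigned to the catch-all is exactly the fudge I’m unsure about:

```python
# Keep some prior mass on "something I haven't thought of" so that no amount
# of data drives the named hypotheses' total posterior all the way to 1.
prior = {"H1": 0.6, "H2": 0.3, "catch_all": 0.1}

# Likelihood of the observed data under each hypothesis. We have no model for
# the catch-all, so it gets an arbitrary middling value (this is the fudge).
likelihood = {"H1": 0.8, "H2": 0.1, "catch_all": 0.5}

unnormalised = {h: prior[h] * likelihood[h] for h in prior}
total = sum(unnormalised.values())
posterior = {h: p / total for h, p in unnormalised.items()}

for h, p in posterior.items():
    print(f"{h:10s} {p:.3f}")

# The catch-all keeps roughly 9% of the posterior mass here, so "H1" can't be
# reported at 99.9% confidence no matter how cleanly it beats the alternatives
# we actually thought of.
```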