I didn’t look for, and so wasn’t aware of, any larger community. I found the two linked papers and, once I realized what was going on, recognized the apparent error in a few other places. I agree that “decreasing the quality of your data should not make you more confident” is obvious when stated that way, but as with many “obvious” insights, the hard part is recognizing the pattern when it comes up. I tried to point this out to Michael Weissman in one of the ACX threads (he did a Bayesian analysis of the lab-leak question, similar to Rootclaim’s), and he repeatedly defended arguments of this form even after I showed that he was getting fairly large Bayes factors based entirely on epistemic uncertainty.
Did you read section 2c of the paper? It seems to be saying something very similar to the point you made about the tracking uncertainty:
for a fixed S/R [relative uncertainty of the closest approach distance] ratio, there is a maximum computable epistemic probability of collision. Whether or not the two satellites are on a collision course, no matter what the data indicate, the analyst will have a minimum confidence that the two satellites will not collide. That minimum confidence is determined purely by the data quality… For example, if the uncertainty in the distance between two satellites at closest approach is ten times the combined size of the two satellites, the analyst will always compute at least a 99.5% confidence that the satellites are safe, even if, in reality, they are not…
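That 99.5% floor is easy to reproduce numerically. Here is a quick sketch of my own (not from the paper), under the simplifying assumption that the estimated miss vector in the encounter plane is a 2-D isotropic Gaussian with standard deviation σ = 10R, where R is the combined hard-body radius. The computed collision probability is maximized when the estimated miss distance is zero, and even then it comes out to about 0.005:

```python
import math
import random

def collision_prob(mu, sigma, R, n=200_000):
    """Monte Carlo estimate of P(miss distance < R) when the miss
    vector is 2-D isotropic Gaussian with mean offset mu along one axis."""
    random.seed(0)  # reproducible sketch
    hits = sum(
        math.hypot(mu + random.gauss(0, sigma), random.gauss(0, sigma)) < R
        for _ in range(n)
    )
    return hits / n

R, sigma = 1.0, 10.0  # S/R = 10, as in the paper's example

# Best case for "collision": estimated miss distance of zero.
# For a centered 2-D Gaussian the closed form is Rayleigh:
p_max = 1 - math.exp(-R**2 / (2 * sigma**2))
print(f"max computable Pc: {p_max:.4f}")  # ~0.0050, i.e. at least 99.5% "safe"

# Any nonzero estimated miss distance only lowers the computed Pc further:
print(f"Pc at 1-sigma offset: {collision_prob(10.0, sigma, R):.4f}")
```

So no matter what the tracking data say, an analyst using this model with σ = 10R can never report more than about a 0.5% collision probability.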
So when you say
then you must content yourself with avoiding approaches within around 100 metres, and you will be on the equivalent of the yellow line in that figure.
Is this not essentially what the confidence region approach is doing?