You say that as though it isn’t evidence at all, rather than simply weaker evidence than it could have been. With a 95% confidence interval, there is only a 5% chance of getting a false positive. Ignoring additional evidence will not change that.
But that does not tell you how likely a given positive result is to be a false positive; for that you’d also need to know what fraction of the (implicitly) tested hypotheses are true.
If the result is on the border of the confidence interval, the probability of a false positive is 50%; if it’s twice as far out, it’s 5%. That should give an okay idea of the range. I’d much prefer something with a 99% confidence interval, but a 95% one is still pretty good. Even if the effect is just barely statistically significant, the odds ratio is still 10:1.
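To make the base-rate point concrete, here is a minimal sketch of how the fraction of true hypotheses combines with the test’s error rates to give the chance that a positive result is a false positive. The specific numbers (a 10% base rate, 80% power) are illustrative assumptions, not figures from this thread.

```python
def false_positive_fraction(base_rate, alpha=0.05, power=0.8):
    """Fraction of statistically significant results that are false positives.

    base_rate: fraction of tested hypotheses that are actually true (assumed)
    alpha:     significance threshold (chance a null hypothesis tests positive)
    power:     chance a true effect tests positive (assumed)
    """
    true_positives = base_rate * power          # true effects that get detected
    false_positives = (1 - base_rate) * alpha   # null effects flagged anyway
    return false_positives / (true_positives + false_positives)

# If only 1 in 10 tested hypotheses is true, a test with a 5% false-positive
# rate still produces many false positives among its significant results:
print(false_positive_fraction(0.1))  # 0.36 -- over a third are false alarms
```

This is why the 5% figure alone doesn’t settle how trustworthy a given positive result is: the answer moves with the base rate of true hypotheses being tested.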
I’m not sure how likely it is that their hypothesis is true, but it’s likely enough that they were willing to risk spending money to check it.