There’s some truth here, in the same sense that probability assignments can never be 0 or 1. There’s always some chance, and always some causal link.
HOWEVER, a lot of things can get so low that they’re unmeasurable and inconsequential at the level we’re talking about. Different purposes of modeling will have different thresholds for rounding to 0, but almost all of them will benefit from doing so. Making this explicit will sometimes help, and is sometimes useful for reminding yourself of your limits of measurement and understanding.
Unmeasurably small does NOT mean nonexistent. But it DOES mean it’s small. If you also have analytic reasons to think it’s VERY SMALL (say, 0.000001), or you know of much larger features (say 100x or more), it’s perfectly reasonable to ignore the tiny ones.
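To make that reasoning concrete, here is a minimal sketch (all numbers and names are hypothetical, not from the discussion above) of rounding effect sizes to zero only when they fall below an explicitly stated relevance threshold:

```python
# Hypothetical "relevance threshold": effects below it are treated as zero,
# but the threshold itself is made explicit so others can disagree with it.
RELEVANCE_THRESHOLD = 1e-4

def round_to_zero(effect_size, threshold=RELEVANCE_THRESHOLD):
    """Return 0.0 for effects below the threshold, else the effect unchanged."""
    if abs(effect_size) < threshold:
        return 0.0
    return effect_size

# A tiny effect (1e-6) sits roughly 100x below the threshold, so it rounds
# to zero; the larger effects pass through untouched.
effects = {"tiny": 1e-6, "small": 5e-4, "large": 0.05}
rounded = {name: round_to_zero(e) for name, e in effects.items()}
```

The point of naming `RELEVANCE_THRESHOLD` as a constant rather than burying it in the comparison is exactly the transparency being argued for: the rounding rule is visible, stated once, and open to challenge.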
Indeed, I fully agree with this. Yet when deciding that something is so small that it’s not relevant, it’s (in my view anyway) important to be mindful of that, and to be transparent about your “relevance threshold”, as other people may disagree about it.
Personally, I think it’s perfectly fine for people to consciously say “the effect size of this is likely so close to 0 we can ignore it” rather than “there is no effect”, because the former may well be completely true, while the latter hints at a level of ignorance that leaves the door wide open for conceptual mistakes.