So this does really happen; a frustratingly large amount of my time is spent convincing other epidemiologists to do something, anything, to control for multiple comparisons. I’ve got two additional comments on the article.

First, it’s worse than that. There is a genuine lack of multiple-comparison control when you look at just the published results, but that’s only the tip of the iceberg. There are a ton of analyses that get run in the name of “understanding the data” and then get tossed once you finally find something publishable.

Second, this kind of thing isn’t limited to observational epi. There are plenty of non-FDA-scrutinized randomized trials (just look at the social science or education literature) where it happens. “Oh well, the curriculum we implemented didn’t reduce alcohol, violence, unprotected sex, or marijuana use; but cigarette use went down in the intervention schools!”

The only way this stuff stops is if journal editors start taking it more seriously. We should require a priori hypothesis specification even for observational studies, null results should be published, and multiple comparisons should be adjusted for; and, for the love of all that is good in this world, keep publishing replications until no one has any reasonable doubt about the relationship in question.
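To make the “adjust for multiple comparisons” point concrete, here is a minimal sketch of the Benjamini–Hochberg false discovery rate procedure, one common adjustment. The p-values are made up for illustration; the comment above does not prescribe any particular correction, so treat this as one option among several (Bonferroni being the simpler, more conservative alternative).

```python
# Hypothetical p-values from one study testing 15 outcomes (illustrative only).
p_values = [0.0001, 0.0004, 0.0019, 0.0095, 0.02, 0.028, 0.03, 0.034,
            0.046, 0.32, 0.43, 0.57, 0.65, 0.76, 1.0]

def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha."""
    m = len(pvals)
    # Rank hypotheses by p-value, smallest first.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k/m) * alpha,
    # then reject every hypothesis ranked at or below k.
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k = rank
    return sorted(order[:k])

rejected = benjamini_hochberg(p_values)
print(rejected)  # indices of the outcomes that survive the correction
```

With these numbers, naive testing at 0.05 would declare nine “significant” outcomes; the BH correction keeps only four, which is exactly the kind of pruning the comment is asking for before results reach a journal.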
From Reddit: