This is too nihilistic, and not really what experts like Ioannidis are proposing. Better to evaluate the studies (or find sources that evaluate them) individually for their sample size and statistical practice, such as whether they control for relevant covariates and apply multiple-hypothesis-testing corrections.
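To make the multiple-testing point concrete, here is a minimal sketch (plain Python; the function name and toy p-values are my own, not from any of the studies discussed) of the Benjamini–Hochberg step-up procedure, one standard correction to look for when evaluating a paper that tests many hypotheses:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure.

    Returns a list of booleans, True where the corresponding hypothesis
    is rejected while controlling the false discovery rate at `alpha`.
    """
    m = len(pvals)
    # Rank the p-values from smallest to largest, remembering positions.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # Reject every hypothesis at or below that rank.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject


# Toy example: four tests, three small p-values and one clear null.
print(benjamini_hochberg([0.01, 0.02, 0.03, 0.5]))
# A plain Bonferroni cutoff (0.05 / 4 = 0.0125) would keep only the first.
```

A study that reports twenty associations at p < 0.05 with no such correction is exactly the kind of result the parent comment suggests discounting.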
You can download a video of Ioannidis’ Mar ’11 lecture on nutrition from http://videocast.nih.gov/PastEvents.asp?c=144 (it’s big though, 250 MB). Some notes:
Randomized trials have problems too.
For example, they’ll often inflate the effects by contrasting the most extreme groups (upper vs lower 20%).
Or just basic biases, like the winner’s curse (large effects tend to come from studies with small sample sizes; you can see this by plotting the log of the treatment effect against the log of the total sample size in the Cochrane database) or publication bias (which leads to missing data).
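The log-effect vs. log-sample-size check mentioned above can be sketched in a few lines of plain Python (the function name and toy data are mine, not drawn from the actual Cochrane database): fit a least-squares line and look at the slope. A clearly negative slope means smaller studies are reporting larger effects, the signature of the winner’s curse and other small-study effects.

```python
import math

def small_study_slope(effects, sample_sizes):
    """Least-squares slope of log(effect) on log(sample size).

    A clearly negative slope suggests that smaller studies report
    larger effects (winner's curse / small-study effects).
    Assumes all effects and sample sizes are positive.
    """
    xs = [math.log(n) for n in sample_sizes]
    ys = [math.log(e) for e in effects]
    mean_x = sum(xs) / len(xs)
    mean_y = sum(ys) / len(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    return cov / var


# Toy data where the reported effect shrinks like 1/sqrt(n),
# i.e. effects that are pure sampling noise: slope is exactly -0.5.
sizes = [10, 100, 1000]
effects = [n ** -0.5 for n in sizes]
print(small_study_slope(effects, sizes))  # -0.5
```

This is the same idea as an Egger-style funnel-plot test, just stripped down; a real analysis would also weight studies by their precision.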
Odds ratios in randomized trials also decrease over time.
Generally, Ioannidis wants massive testing via biobanks (sample sizes in the millions), longitudinal measurements, and large-scale global collaborations. Those do not necessarily mean only randomized trials; in fact, randomized trials are pretty much impossible at that scale. Epi can work too, it just needs to be done well.
It would be nice to have what Ioannidis suggests, but what do we do in the decades (if ever) before those suggestions happen? Throwing out the correlations seems like the best idea to me: 20% of randomized trials having issues is a win in a way that 80% of results with serious issues is not.
Certainly not all correlations are useless. This feels like I am breaking some analogue of Godwin’s law, but just consider the association between cigarette smoke and some types of cancer. Generally, discounting correlations and treating them with more skepticism seem like good ideas. But “throwing out” seems needlessly harsh to me, unless for some reason you are in a hurry, in which case you should think about deferring to more expert sources anyway.
For example, this useful source http://www.informationisbeautiful.net/play/snake-oil-supplements/ (see the spreadsheet at the link) uses mostly randomized trials but also includes some studies which discuss prospective associations. I don’t think the organizers should be criticized for including the correlations.
It seems like everyone wants to bring up tobacco as the justification for such irresponsibility—it paid off once, so we should keep doing it… See my reply to http://news.ycombinator.com/item?id=2870962 (since they brought up tobacco before you did).
Recently it was announced that some organization (I thought it was the SIAI, but I can’t find it on their blog) would work to form a panel to examine and disambiguate the state of knowledge in a number of different areas, the first being diet, nutrition, and exercise. It seems imperative that they take this into consideration. What was this organization, and do we have any way of knowing whether they will?
Are you referring to the Persistent Problems Group?
My own opinion of that proposal (I’m not sure whether I said this elsewhere) is that what the Group would do is already being done, and better, by things like the Cochrane Collaboration. There is no comparative advantage there.
That was my thought as well, although if this group were formed I’d be extremely interested in how they worked and what their findings were. I’d imagine Bayesian methods would be the norm, which might give them a leg up.
It would be particularly interesting if they consistently disagreed with mainstream systematic reviews.
Yes, thanks.