Some of the material will be familiar, but there are examples I hadn’t seen before of how really hard it is to be sure you’ve asked the right question and squeezed out the sources of error in the answer.
What follows is what I consider to be a good parts summary—if you want more theory, you should read the article.
Consider a study published in the NEJM that showed an association between diabetes and pancreatic cancer.[3] The casual reader might conclude that diabetes causes pancreatic cancer. However, further analysis showed that much of the diabetes was of recent onset. The pancreatic cancer preceded the diabetes, and the cancer subsequently destroyed the insulin-producing islet cells of the pancreas. Therefore, this was not a case of diabetes causing pancreatic cancer but of pancreatic cancer causing the diabetes.
....
To illustrate the point, consider the ISIS-2 trial,[8] which showed reduced mortality in patients given aspirin after myocardial infarction. However, subgroup analyses identified some patients who did not benefit: those born under the astrological signs of Gemini and Libra. Patients born under the other zodiac signs derived a clear benefit, with a P value < .00001.
I guessed at a seasonal effect, but Gemini and Libra aren’t adjacent signs.
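The zodiac subgroups are a textbook multiple-comparisons trap: run enough subgroup tests at the 5% level and some will come up "significant" by chance. A quick sketch of the arithmetic (the twelve-signs calculation is mine, not the article's, and it assumes independent tests):

```python
# Chance of at least one spurious subgroup "effect" when twelve
# independent subgroups are each tested at a 5% significance level
# and the treatment truly works equally well in all of them.
alpha = 0.05
n_subgroups = 12  # one per zodiac sign

p_spurious = 1 - (1 - alpha) ** n_subgroups
print(f"P(at least one false subgroup finding) = {p_spurious:.2f}")
```

Nearly even odds of at least one spurious subgroup finding, before any deliberate fishing.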
The frequency of these false-positive studies in the published literature can be estimated to some degree.[2] Consider a situation in which 10% of all hypotheses are actually true. Now consider that most studies have a type 1 error rate (the probability of claiming an association when none exists [ie, a false positive]) of 5% and a type 2 error rate (the probability of claiming there is no association when one actually exists [ie, a false negative]) of 20%, which are the standard error rates presumed by most clinical trials. This allows us to create the following 2×2 table.
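The arithmetic behind that 2×2 table is worth making explicit. A minimal sketch using the article's assumptions (10% of hypotheses true, 5% type 1 error, 20% type 2 error), applied to a hypothetical batch of 1,000 hypotheses:

```python
# Classify 1,000 hypotheses under the stated error rates:
# 10% are actually true; tests have alpha = 5% and beta = 20%.
n, p_true, alpha, beta = 1000, 0.10, 0.05, 0.20

true_hyp = n * p_true              # 100 hypotheses that are really true
false_hyp = n - true_hyp           # 900 that are really false

true_pos = true_hyp * (1 - beta)   # 80 true effects detected
false_neg = true_hyp * beta        # 20 true effects missed
false_pos = false_hyp * alpha      # 45 spurious "positive" findings
true_neg = false_hyp * (1 - alpha) # 855 nulls correctly retained

positives = true_pos + false_pos   # 125 "significant" results overall
print(f"false positives among positive findings: {false_pos / positives:.0%}")
```

So even with the standard error rates, more than a third of positive findings (45 of 125, or 36%) are false.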
I didn’t realize that the false-negative rate (not seeing a relationship when there actually is one) is higher than the false-positive rate. This might mean that a lot of useful medical tools get eliminated before they can be explored.
Also (credit given to Seth Roberts), if a minority of people respond very well to a treatment being tested, this is very unlikely to be explored because the experiment is structured to see whether the treatment is good for people in general (actually, people in general in the group being tested). This wasn’t in the NEJM piece.
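Roberts's point can be quantified with the standard two-arm sample-size approximation (the numbers below are mine, purely illustrative): if only 10% of patients respond, the average treatment effect shrinks tenfold, and the trial needed to detect that average grows roughly a hundredfold.

```python
# Approximate patients per arm to detect a mean difference `effect`,
# with two-sided alpha = 5% and 80% power, outcome SD = 10:
#   n ~= 2 * (z_{alpha/2} + z_{beta})^2 * (sd / effect)^2
Z_ALPHA, Z_BETA = 1.96, 0.84

def n_per_arm(effect, sd=10.0):
    return 2 * (Z_ALPHA + Z_BETA) ** 2 * (sd / effect) ** 2

full_effect = 5.0                # benefit seen in the responsive 10% minority
avg_effect = 0.10 * full_effect  # diluted average effect across everyone

print(round(n_per_arm(full_effect)))  # 63 per arm if everyone responded
print(round(n_per_arm(avg_effect)))   # 6272 per arm for the diluted average
```

A trial powered for the diluted average is a hundred times larger than anyone would run for such a small apparent effect, so the minority's benefit simply never reaches significance.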
One classic example of selection bias occurred in 1981 with a NEJM study showing an association between coffee consumption and pancreatic cancer.[15] The selection bias occurred when the controls were recruited for the study. The control group had a high incidence of peptic ulcer disease, and so as not to worsen their symptoms, they drank little coffee. Thus, the association between coffee and cancer was artificially created because the control group was fundamentally different from the general population in terms of their coffee consumption. When the study was repeated with proper controls, no effect was seen.[16]
....
Information bias, as opposed to selection bias, occurs when there is a systematic error in how the data are collected or measured. Misclassification bias occurs when the measurement of an exposure or outcome is imperfect; for example, smokers who identify themselves as nonsmokers to investigators or individuals who systematically underreport their weight or overreport their height.[17] A special situation, known as recall bias, occurs when subjects with a disease are more likely to remember the exposure under investigation than controls. In the INTERPHONE study, which was designed to investigate the association between cell phones and brain tumors, a spot-check of mobile phone records for cases and controls showed that random recall errors were large for both groups with an overestimation among cases for more distant time periods.[18] Such differential recall could induce an association between cell phones and brain tumors even if none actually exists.
....
An interesting type of information bias is the ecological fallacy. The ecological fallacy is the mistaken belief that population-level exposures can be used to draw conclusions about individual patient risks.[4] A recent example of the ecological fallacy was a tongue-in-cheek NEJM study by Messerli[19] showing that countries with high chocolate consumption won more Nobel prizes. The problem with country-level data is that countries don’t eat chocolate, and countries don’t win Nobel prizes. People eat chocolate, and people win Nobel prizes. This study, while amusing to read, did not establish the fundamental point that the individuals who won the Nobel prizes were the ones actually eating the chocolate.[20]
On the other hand, if you want to improve the odds of your children winning a Nobel, maybe you should move to a chocolate-eating country.
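The fallacy is easy to reproduce with toy data (the countries and numbers below are invented purely for illustration): country-level averages can correlate strongly even when, at the individual level, the association runs the other way.

```python
from math import sqrt

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Three invented 10-person countries: (chocolate eaten per NON-winner,
# number of Nobel winners). The winners themselves eat no chocolate.
countries = [(5, 1), (10, 2), (15, 3)]

choc, nobel = [], []  # individual-level data across all 30 people
for per_nonwinner, winners in countries:
    choc += [0] * winners + [per_nonwinner] * (10 - winners)
    nobel += [1] * winners + [0] * (10 - winners)

# Country-level: mean chocolate consumption vs share of Nobel winners.
mean_choc = [(10 - w) * c / 10 for c, w in countries]
win_share = [w / 10 for _, w in countries]

print(pearson(mean_choc, win_share))  # strongly positive at country level
print(pearson(choc, nobel))           # negative at the individual level
```

The country-level correlation is near-perfect while the individual-level one is negative: exactly the inversion the ecological fallacy hides.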
....
A 1996 study sought to compare laparoscopic vs open appendectomy for appendicitis.[29] The study worked well during the day, but at night the presence of the attending surgeon was required for the laparoscopic cases but not the open cases. Consequently, the on-call residents, who didn’t like calling in their attendings, adopted a practice of holding the translucent study envelopes up to the light to see if the person was randomly assigned to open or laparoscopic surgery. When they found an envelope that allocated a patient to the open procedure (which would not require calling in the attending and would therefore save time), they opened that envelope and left the remaining laparoscopic envelopes for the following morning. Because cases operated on at night were presumably sicker than those that could wait until morning, the actions of the on-call team biased the results. Sicker cases preferentially got open surgery, making the outcomes of the open procedure look worse than they actually were.[30] So, though randomized trials are often thought of as the solution to confounding, if randomization is not handled properly, confounding can still occur. In this case, an opaque envelope would have solved the problem.
Remembering that humans aren’t especially compliant is hard.
From reading Guinea Pig Zero: The Journal for Human Research Subjects—human beings are not necessarily going to comply with onerous food regimes. I expect that most who don’t comply simply don’t want to, but the magazine made the argument that some refuse because a human research subject is never going to be able to afford treatment based on the results of the research.
The article: It Ain’t Necessarily So: Why Much of the Medical Literature Is Wrong