As far as I can tell, epidemiology and medicine mostly do (c), in the form of RCTs (which are the gold standard of medical evidence, short of meta-analyses). Other study designs, such as most variants of case-control and cohort studies, do take the (a) approach, but they aren’t considered the same level of evidence as randomized controlled trials.
> but they aren’t considered to be the same level of evidence as randomized controlled trials.
Quite rightly: if we randomize, we don’t care what the underlying causal structure is; randomization cuts out all confounding anyway. Methods (a), (b), and (d) all rely on structural assumptions that may or may not hold, and even granting those assumptions, causal inference from observational data is quite difficult. The problems with RCTs are expense, ethics, and statistical power (it is hard to enroll a ton of people in an RCT).
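The point about randomization can be shown with a tiny simulation (hypothetical numbers, not from the thread): a confounder biases the naive treatment/control contrast in observational data, while coin-flip assignment makes treatment independent of the confounder and recovers the true effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Unobserved confounder U affects both treatment uptake and the outcome.
u = rng.normal(size=n)

# Observational: probability of treatment depends on U (confounding).
t_obs = rng.binomial(1, 1 / (1 + np.exp(-u)))
# Randomized: treatment assigned by a fair coin flip, independent of U.
t_rct = rng.binomial(1, 0.5, size=n)

def outcome(t, u):
    # True causal effect of treatment is exactly 1.0; U adds 2.0 per unit.
    return 1.0 * t + 2.0 * u + rng.normal(size=len(t))

y_obs = outcome(t_obs, u)
y_rct = outcome(t_rct, u)

# Naive treated-minus-control difference in means.
naive_obs = y_obs[t_obs == 1].mean() - y_obs[t_obs == 0].mean()
naive_rct = y_rct[t_rct == 1].mean() - y_rct[t_rct == 0].mean()
print(naive_obs)  # badly biased upward by U
print(naive_rct)  # close to the true effect 1.0
```

The same naive estimator is wrong on the observational data and right on the randomized data; no knowledge of the causal structure was needed for the latter.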
Epidemiology and medicine do a lot of (a); look for the keywords “g-formula”, “g-estimation”, “inverse probability weighting”, “propensity score”, “marginal structural models”, “structural nested models”, “covariate adjustment”, “back-door criterion”, etc.
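One of those keywords, inverse probability weighting, can be sketched in a few lines (a toy example with made-up numbers, using the true propensity score for clarity; in practice it would be estimated, e.g. by logistic regression of treatment on covariates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Observed confounder X drives both treatment and outcome.
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-x))                    # propensity score P(T=1 | X)
t = rng.binomial(1, p)
y = 1.0 * t + 2.0 * x + rng.normal(size=n)  # true causal effect = 1.0

# Naive treated-minus-control contrast is confounded by X.
naive = y[t == 1].mean() - y[t == 0].mean()

# Inverse probability weighting: weight each unit by 1 / P(T = t_i | X_i),
# creating a pseudo-population in which treatment is independent of X.
w = t / p + (1 - t) / (1 - p)
ipw = (np.sum(w * t * y) / np.sum(w * t)
       - np.sum(w * (1 - t) * y) / np.sum(w * (1 - t)))
print(naive)  # biased by the confounder
print(ipw)    # close to the true effect 1.0
```

This is the simplest member of the family; the g-formula, marginal structural models, and the rest generalize the same idea to more complex (e.g. time-varying) treatment settings.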
People talk about “controlling for other factors” when discussing associations all the time, even in non-technical press coverage. They are talking about (a).
> People talk about “controlling for other factors” when discussing associations all the time, even in non-technical press coverage. They are talking about (a).
True, true. It’s “gold standard” or “preferred level of evidence” versus “what’s mostly conducted given the funding limitations”. However, to make it into a guideline, there are often RCT follow-ups for promising associations uncovered by the lesser study designs.
> look for the keywords “g-formula”, “g-estimation”, “inverse probability weighting,” “propensity score”, “marginal structural models,” “structural nested models”, “covariate adjustment,” “back-door criterion”, etc.
I, of course, know all of those. The letters, I mean.