I run into this problem every time I read anything on health or medicine (it seems limited to these topics).
This is an interesting point in itself. Why health and medicine?
Maybe causal inference is straight up more difficult in health and medicine: effects are smaller and more ambiguous than in the hard sciences, and they have many hard-to-manipulate causes that blur the signal.
There are borderline results in fields like physics, obviously, but they’re usually more esoteric and tend to have relatively clear-cut theory behind them (which is why I’d guess you’re not too worried about, say, last year’s ambiguous results from the Cryogenic Dark Matter Search), so they don’t provoke so much back-and-forthing.
This leads me to a prediction: you’d have as much difficulty reading up on results in psychology and sociology as you do in health and medicine. As for what to do about it? Uh...not sure. I’m still chewing over this thread.
My self-serving explanation is that health/medicine/biology select for people who enjoy (or better tolerate) rote memorization (Levels 0-1 in my hierarchy) rather than “how it works”-type understanding (Levels 2-3). This yields a group of intelligent people who are less able to see the broader meaning of what they’re doing, a skill that tends to curtail questionable statistical practices.
Yes, I know it sounds insulting, but what really turned me off from taking more biology in high school and college, and from med school, is that it’s so much more memorization-oriented than generative-model-oriented. This suspicion is confirmed when I hear about e.g. ecologists just now getting around to using the method of adjacency-matrix eigenvectors (i.e., Google’s PageRank) to identify key organisms in ecosystems.
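For what it’s worth, the eigenvector trick is simple enough to sketch in a few lines. The toy food web and species names below are invented purely for illustration, not taken from any ecology paper:

```python
import numpy as np

# Toy food web: A[i][j] = 1 means importance flows from species j to
# species i (here, "i eats j"). Species are made up for illustration.
species = ["kelp", "urchin", "otter", "crab"]
A = np.array([
    [0, 0, 0, 0],   # kelp eats nothing
    [1, 0, 0, 0],   # urchin eats kelp
    [0, 1, 0, 1],   # otter eats urchins and crabs
    [1, 0, 0, 0],   # crab eats kelp
], dtype=float)

def pagerank(A, damping=0.85, iters=100):
    """Power iteration for the dominant eigenvector of the damped,
    column-normalized adjacency matrix (the PageRank recipe)."""
    n = A.shape[0]
    col_sums = A.sum(axis=0)
    # Column-normalize; a dangling column (no outgoing weight)
    # gets a uniform distribution instead.
    denom = np.where(col_sums == 0, 1.0, col_sums)
    M = np.where(col_sums > 0, A / denom, 1.0 / n)
    v = np.full(n, 1.0 / n)
    for _ in range(iters):
        v = damping * (M @ v) + (1 - damping) / n
    return v / v.sum()

ranks = pagerank(A)
for name, r in sorted(zip(species, ranks), key=lambda t: -t[1]):
    print(f"{name}: {r:.3f}")
```

On this little web the top predator accumulates the most rank, since everything it eats funnels importance toward it; the damping term is the standard PageRank device that keeps the iteration well-behaved on dead ends.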
And an alternate alternate explanation: Poor priorities. Doctors want to hear all the clinical details, and are mentally worn out by the time they finish with those. There’s just no time or energy to do the math too.
When I used to work for NASA in theoretical air traffic management, I’d try to explain some abstract point about turbulent or chaotic traffic flow to operational FAA guys, and they would get bogged down in details about what kind of planes we were talking about, what altitudes they were flying at, which airlines they belonged to, and on and on.
What is theoretical air traffic management?
I mean, in more detail than I can glean just from knowing what those words mean.
You won’t hear that phrase, but I mean the theoretical study of air traffic. For instance, I studied “free flight”, which is when airplanes can fly directly from their takeoff airport to their landing airport and manage their own collision-avoidance, and showed that in certain free-flight situations you can increase the throughput of the airspace by reducing the information that you give to pilots.
It’s an interesting general phenomenon: pilots each try to optimize their own route using all of their information, so the more information they have, the more unpredictable their behavior is. More information can actually cause more trouble than it solves in some cases, at least when it’s information about what other agents are doing.
I had a contract from NASA to look for chaotic behavior in en-route air traffic, but my conclusion was that it is unlikely and nothing need be done to avoid it at present.
Reminds me of Braess’s paradox; a route people don’t know about is similar to one that doesn’t exist.
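Braess’s paradox has a standard worked example; the numbers below (4000 drivers, links costing n/100 or a fixed 45 minutes) are the usual textbook ones, not anything from this thread:

```python
# Braess's classic network: 4000 drivers go from S to E.
# Route 1: S->A (n/100 min, n = traffic on that link), A->E (45 min).
# Route 2: S->B (45 min), B->E (n/100 min).
N = 4000

# Without a shortcut, the symmetric equilibrium splits traffic
# evenly: 2000 per route, each costing 2000/100 + 45 minutes.
n = N // 2
cost_without = n / 100 + 45          # 65.0

# Add a zero-cost shortcut A->B. A driver on S->A->B->E pays
# n1/100 + 0 + n2/100, where each n/100 leg costs at most
# 4000/100 = 40 min -- always cheaper than a 45-min fixed leg.
# So at equilibrium every driver takes the shortcut route.
cost_with = N / 100 + N / 100        # 80.0

print(f"without shortcut: {cost_without} min, with: {cost_with} min")
```

Adding the free link pushes the equilibrium travel time from 65 to 80 minutes for everyone, which is exactly the sense in which a route nobody knows about can be better than one everybody uses.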
Here’s an alternate insulting explanation, based on many (but not all) of the doctors I’ve met:
Doctors are bad at being unsure of themselves.
You’re right. I mentally boxed them off when I made my original statement. Thinking further, I might add economics to the list.
You don’t get these problems with economics. In economics journals it’s standard practice to include your specification, as well as the whole regression output, including a full list of included terms and their significance tests.
When I was completing my Master’s degree I was a sessional assistant for an introductory quantitative methods course for economics and finance majors. The kind of simple linear regression at issue here would be considered overly simplistic at that level (at least in the absence of some basic specification testing), and if the J-curve is already accepted in medicine, modelling the relationship linearly is unforgivable. It’s not like non-linear transformations are hard to do, either; you can run them in Excel without too much trouble.
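As a sketch of how cheap the fix is, here is a linear fit next to a quadratic one on made-up J-shaped data; the “non-linear transformation” amounts to one extra column in the design matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic J-shaped data: risk falls, then rises, with dose.
# The curve and noise level are invented purely for illustration.
x = np.linspace(0, 10, 200)
y = (x - 3) ** 2 + 5 + rng.normal(0, 2, size=x.shape)

# Fit y ~ 1 + x (linear) and y ~ 1 + x + x^2 (quadratic) by OLS.
X_lin = np.column_stack([np.ones_like(x), x])
X_quad = np.column_stack([np.ones_like(x), x, x ** 2])

def ols_rss(X, y):
    """Least-squares fit; return coefficients and residual sum of squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return beta, float(resid @ resid)

_, rss_lin = ols_rss(X_lin, y)
_, rss_quad = ols_rss(X_quad, y)
print(f"RSS linear: {rss_lin:.1f}, quadratic: {rss_quad:.1f}")
```

On J-shaped data the linear model leaves a large systematic residual that the single squared term removes, which is the whole point: the transformation costs one line, not a new methodology.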
FWIW, I’m of the impression that economists get a better grounding in quantitative methods than other social scientists (and I would say that the profession is a bit too keen on mathematical approaches in some cases), so maybe you would have similar problems with psychology or sociology. But I don’t think economics has this problem.
Also, maybe more importantly, less in the way of financial and ideological commitment.
(Incidentally, my impression is that theoretical debates are more intense in physics than medicine, though I don’t know much about such theoretical debates as might exist in medicine.)