I will be generally skeptical here. If someone wants to coin a new term, or gain attention by accusing people of being systematically wrong in a new way, they’d better back it up.
> the phenomenon where people tend to believe that a medical treatment would work if it has a plausible-sounding mechanism of action, even when a randomized controlled trial would likely fail to demonstrate its effectiveness
This is an odd way of putting it. If I understand the scenario correctly, the trial has not yet been run, and the person doesn’t know that “a randomized controlled trial would likely fail to demonstrate its effectiveness”—in fact, they expect the opposite. One might rephrase it as “tend to believe [...] even though the treatment probably won’t work”. This then raises the question of why you think it probably won’t work. Which you answer:
> The human body is really complicated and most treatments don’t do anything, so a good explanation for how a treatment works is not enough; you have to put it to the test.
Ok, so, it’s an empirical fact that we’ve investigated many proposed treatments in the past with explanations as plausible-sounding as this hypothetical treatment, and the majority of them have proven ineffective? That’s good to know, and probably surprising to me. It seems that “being ignorant of this empirical fact” is an excellent explanation for people having the wrong expectations here. I would change the statement to this:
> the phenomenon where people are ignorant of the fact—and have expectations counter to it—that many proposed treatments with equally plausible-sounding mechanisms have been tested and the majority proven not to work
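To make the base-rate point concrete, here’s a minimal Bayes sketch. All the numbers are invented for illustration: even if working treatments are somewhat more likely to come with a plausible-sounding mechanism than failed ones, a low base rate of working treatments keeps the posterior low.

```python
# Hypothetical numbers, purely illustrative:
p_works = 0.1                    # assumed base rate: most candidate treatments fail
p_plausible_given_works = 0.9    # working treatments usually have a plausible story...
p_plausible_given_fails = 0.7    # ...but so do many failures

# Bayes' rule: P(works | plausible mechanism)
p_plausible = (p_plausible_given_works * p_works
               + p_plausible_given_fails * (1 - p_works))
p_works_given_plausible = p_plausible_given_works * p_works / p_plausible

print(round(p_works_given_plausible, 3))
```

Under these made-up numbers the posterior is still only about 12.5%—“plausible mechanism” is weak evidence when the prior is low, which is exactly the empirical fact the restated phenomenon hinges on.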
My next question is “What people are we talking about here?” If they were investors deciding whether to fund biotech startups, or whoever’s in charge of approving research grants, then, yeah, it would be surprising if they hadn’t learned such important facts about medical research.
If these are random civilians… then I don’t think it’s very surprising. At what point would it be taught to them? Would it come up in their lives? On the first question, to the extent normal people are taught anything about medicine, it seems to be primarily “there are certified medical professionals, they know more than you, you should shut up and listen to them and not attempt to think for yourself”; in keeping with this, any medically proven drugs they do interact with are presented as fait accompli; if they do ask how the drug works, I expect they’re given a few sentences of plausible-sounding explanation.
Which then explains their expectations: (a) “plausible-sounding explanation from medical person” = “working drug” in their experience, (b) they’re probably never told much about all the drugs that failed in the past (maybe they hear about thalidomide, but that was “toxic side effects” rather than “ineffectual”). Under what circumstances would they learn about the zillions of drugs that turn out ineffectual? Maybe there are news stories about promising drug candidates, but I suspect follow-up stories that say “it didn’t work” get less attention (and are therefore often not written).
This brings us to the “Would it come up in their lives?” question, and, glancing at the timestamps in the video… it probably is indeed talking about civilians, and pseudoscience and homeopathy. Ah. Ok, well… Perhaps it would be worth teaching people about the above empirical fact about medical research, so they’re less vulnerable to those peddling pseudo-medicine. Sounds fine, I’m in favor of teaching people more facts if they’re true. I have no hope of being able to mold today’s schools in any desirable direction, but you are welcome to try.
It seems like there is an additional claim that might be summarized as “people have a too-high prior expectation that we can come up with a simple plan to do a thing and it’ll work without anything going severely wrong”. Which… maybe. I would say:
There are plenty of fields in which a human who says “I have a plan to do thing X I haven’t specifically done before; it should consist of these simple steps A B C” is in fact >90% likely to be able to execute X. I’d say this is true for programming, for example. “The planning fallacy” refers to the fact that they’ve usually forgotten several intermediate substeps and didn’t know about several more, so X will take longer than expected, but they still manage to do it.
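The planning-fallacy claim above can be sketched as a toy simulation (all numbers invented): every plan has a few known steps plus some forgotten substeps, so actual time always exceeds the estimate, yet in this simple model every plan still completes.

```python
import random

random.seed(0)

def run_plan(known_steps=3, step_time=1.0):
    """Toy model: estimate covers only the known steps; the forgotten
    substeps inflate actual time but don't prevent completion."""
    estimate = known_steps * step_time
    forgotten = random.randint(1, 4)          # substeps not in the plan
    actual = (known_steps + forgotten) * step_time
    return estimate, actual

trials = [run_plan() for _ in range(1000)]
overruns = sum(actual > est for est, actual in trials)
completed = len(trials)   # in this toy model every plan finishes
print(overruns, completed)
```

Every trial overruns its estimate, but all of them finish—overdue success rather than failure, which is the distinction being drawn.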
In much of society, we deliberately construct things to be mechanically simple. I mean, it’s a tautology that the houses, cars, working drugs, machines, etc. that we use must be simple enough for humans to have designed and built, and generally they’re simple enough to use and maintain as well (sometimes requiring expert help). So one’s experience of the world, and lots of entire careers, are in domains where you can in fact expect things to behave relatively simply.
There are probably fields where naive people make the opposite error—they don’t realize how completely and thoroughly people have worked out how to do xyz and troubleshoot all the problems people have encountered along the way, and they fail to ask for help or try to google for solutions.
I guess one could say that medicine is a field where you unavoidably have to interact with an extremely complex thing (biology), and we do it anyway because the rewards of getting it right are so high.
I also worry that the claim is more along the lines of “people have a too-high prior expectation that the world is comprehensible, that science can be done and is worthwhile”. Because I’d be very, very suspicious of anyone trying to push a claim like that without a lot of qualifiers.
> I really like this idea and believe it may translate well into other fields. For example, people may be too eager to believe in theoretical political structures which, if put into practice, would likely fail for reasons we can’t predict, because human society is really complicated and hard to model.
I agree that people often have terrible ideas (and terribly unjustified overconfidence in those ideas) about government, and that this is a major problem, and that “human society is really complicated and hard to model” is part of the explanation here.
But for government, there are additional problems that seem possibly more important. You’re interacting with a system that contains intelligent beings—some possibly more intelligent than you as individuals, and market-like organizations that are almost certainly way smarter than you in aggregate—with different goals than you, in some cases actively opposed to you. If we look up unintended consequences on Wikipedia, specifically the perverse results section… we have the advantage of hindsight, of course, but many of them look like very straightforward cases of “regulator imposes a rule on some rubes, the rubes respond in the obvious way to the new incentives, and the regulator is presumably surprised and outraged”.
Why does this keep happening? Perhaps the regulators really are that incompetent at regulating; I can think of reasons to expect this (typical mind fallacy, having contempt for the rubes, not having “security mindset”, not thinking they or their advisors should put serious effort into considering the rubes’ incentives and their options). Also perhaps principal-agent issues—did the regulators who made these blunders get punished for it? Perhaps their actual job is getting reelected and/or promoted, and the effectiveness of the regulations is irrelevant to it (and they can claim that the fault lies with the real villains who e.g. mislabeled orphans as mentally ill to receive more funding). The latter would explain the former: if they don’t need to be competent at regulating, why would they be? I suspect that’s how it is.[1] “The interests of the rulers aren’t aligned with those of the people” is ultimately the biggest problem, in my view.
And if we move from people whose job nominally is regulating, to ordinary people who don’t rule over anything except maybe (arguably) their own children… well, there’s even less reason to expect people to have optimized their ability to predict the results of hypothetical national-scale policies. Casual political discussion is an area with lots of applause lights, where people import opinions from their party without much critical examination. They might end up with good information about specific issues (or at least the subset of that information that their party wants to talk about), but that seems unlikely to translate into good predictions about hypothetical policies in general.
If naive people’s intuitions about a field are wrong—people who’ve never studied the field nor intend to work in it—this doesn’t strike me as particularly noteworthy. If it matters that their intuitions are wrong, because the decisions are being made by rank amateurs (or those who aren’t even trying) who either don’t realize their ignorance or don’t care, then that is a problem.
Anyway, in the human body… You could sort of metaphorically say that your cells are intelligent (they embody eons of evolutionary experience), and the interests of individual cells aren’t necessarily aligned with yours (cancer), and cells will reject your attempts to interfere and fight back against you (blood-brain barrier and stuff; the immune system; autoimmune problems). But I think it’s pretty different. The drug treatments fail because—I don’t know, but I take it it’s something like “there are 30,000 nearby chemical processes, most of which we aren’t aware of, at least one of which ended up interfering with our drug”. Those processes already existed; the human body isn’t going to invent new ones for the purpose of foiling a new drug (except possibly an irate immune system inventing antibodies). It’s “take a step and hope you didn’t squash any of the thousands of toes you can’t see”, versus “security mindset: your law is providing these smart people a $1 million incentive to find a loophole”.
But perhaps, for every case like this, there were ten other regulations with equally plausible failure modes that didn’t end up happening for obscure reasons. I dunno. I’m not sure how someone would do a comprehensive survey on regulations and evaluate them in this way, but it might be interesting. (One should also weight by the severity of the failure; it’s possible that a 10% failure rate would be unacceptably high.) There’s plenty more I could say, but it would be off-topic.
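The severity-weighting point is just expected value, but a back-of-envelope sketch makes it concrete (all numbers invented): a 10% failure rate is fine when failures are cheap, and unacceptable when they’re catastrophic relative to the benefit.

```python
def expected_value(p_fail, benefit_on_success, cost_on_failure):
    """Expected net value of a regulation: probability-weighted benefit
    minus probability-weighted cost of failure (units arbitrary)."""
    return (1 - p_fail) * benefit_on_success - p_fail * cost_on_failure

# Same 10% failure rate, different severities (hypothetical numbers):
print(expected_value(0.10, benefit_on_success=1.0, cost_on_failure=1.0))   # mild failures: net positive
print(expected_value(0.10, benefit_on_success=1.0, cost_on_failure=20.0))  # severe failures: net negative
```

So the hypothetical survey would need failure costs, not just failure counts, to say anything about whether the observed rate is acceptable.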