Important policies have so many effects that it is near impossible to keep track of them all. In addition, some effects tend to dwarf all others, so it is critical to catch every last one. (Perhaps they follow a Paretian distribution?) It follows that any quantitative analysis of policy effects tends to be seriously flawed.
I don’t think this is the right way to frame the problem.
It is true that even unimportant policies have so many effects that it is de facto impossible to calculate them all. And it is true that one or a few effects tend to dwarf all others. But that does not mean that it’s critical to catch every last one. The effects which dwarf all others will typically be easier to notice, in some sense, precisely because they are big, dramatic, important effects. But “big/important effect” is not necessarily the same as “salient effect”, so in order for this to work in practice, we have to go in looking for the big/important effects with open eyes rather than just asking the already-salient questions.
For instance, in the pot/IQ example, we can come at the problem from either “end”:
What things tend to be really important to humans, in the aggregate, and how does pot potentially impact those? Things like IQ, long-term health, monetary policy, technological development, countries coming out of poverty, etc, are “big things” in terms of what humans care about, so we should ask if pot potentially has predictable nontrivial effects on any of them.
On what things does pot have very large impact, and how much do we care? Pot probably has a big impact on things like recreational activity or how often people are sober. So, how do those things impact the things we care about most?
If people think about the problem in a principled way like this, then I expect they’ll come up with hypotheses like the pot-IQ thing. There just aren’t that many things which are highly important to humans in the aggregate, or that many things on which any given variable has a large expected effect. (Note the use of “expected effect”—variables may have lots of large effects via the butterfly effect, but that’s relevant to decision-making only insofar as we can predict the effects.)
The trick is that we have to think about the problem in a principled way from the start, not just get caught up in whatever questions other people have already brought to our attention.
johnswentworth makes the great point that “some effects tend to dwarf all others, so it is critical to catch every last one” assumes that we can’t identify the big effects early. If people are looking around with open eyes, they are usually able to pick up the most relevant effects first.
What yhoiseth’s framing gets right is that big effects are sometimes not salient, even for people with open eyes. This is especially true when effects are indirect (like substitution effects) and therefore hard to observe directly or estimate with certainty: they are low on salience not only for affective reasons, but also because they can’t benefit from a “big is relevant” heuristic (or “open eyes”) when the effect size itself is unknown. That is, because effects are often not known with certainty, effect size and salience can be negatively correlated rather than positively correlated, even among people with open eyes, and identifying the big effects then requires getting to the bottom of the salience barrel. I am unsure, though, how the relative frequencies of “open eyes will see big effects” vs. “open eyes can still struggle to see big effects” compare.
For example, Scott mentions side effects, addictiveness, pain relief, the War on Drugs, high driving, and less drunk driving as effects. The idea is that the first four are rather small effects and the last two are rather large, but the error bars are small on the first four and huge on the last two. The first four are also highly salient (obvious, personalized, affectively charged) despite being small, while the last two are less salient despite being large: they would be salient if we knew their effect sizes, since effect size matters for salience, but because we lack that precision, their salience rests only on being non-obvious, de-personalized, and coldly econometric. If you were to plot these with effect size on X and salience on Y, you’d get a negative correlation if you were omniscient and able to include the last two effects in your dataset (per yhoiseth). But for Scott and typical discussants, the last two effects are missing data, so you’re left with the weak positive correlation among the first four. At least until someone annoyingly but helpfully tells you it is time to go looking. Then again, johnswentworth is also right that the actual correlation is frequently positive, so this isn’t always a problem.
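The missing-data point can be sketched numerically. Here is a toy illustration with entirely made-up effect-size and salience numbers (none of them come from the discussion above): when the two large-but-uncertain effects are dropped from the dataset, the measured correlation between effect size and salience flips from negative to positive.

```python
# Toy sketch (hypothetical numbers): omitting large-but-uncertain effects
# can flip the sign of the effect-size/salience correlation.

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (effect size, salience) -- the first four are small but salient,
# the last two are large but non-salient (indirect, hard to estimate).
# All values are invented purely for illustration.
effects = {
    "side effects":       (1.0, 6.0),
    "addictiveness":      (2.0, 8.0),
    "pain relief":        (1.5, 7.0),
    "War on Drugs":       (2.5, 9.0),
    "high driving":       (8.0, 3.0),
    "less drunk driving": (9.0, 2.0),
}

full = list(effects.values())
observed = full[:4]  # the two big effects are "missing data" for discussants

r_full = pearson([e for e, _ in full], [s for _, s in full])
r_obs = pearson([e for e, _ in observed], [s for _, s in observed])

print(f"correlation with all six effects:  {r_full:+.2f}")  # negative
print(f"correlation with first four only: {r_obs:+.2f}")  # positive
```

The omniscient plot (all six points) shows the negative correlation yhoiseth worries about, while the observable plot (first four points) shows the benign positive correlation johnswentworth expects.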
Aside: it also, technically, depends on what counts as “open eyes.” I figure Scott and friends are pretty dialed in, so I take their “missing” the big effects as evidence for “open eyes can still struggle to see big effects.” But I suppose an economist who has spent their career studying substitution effects might think Scott et al. were approaching the problem with blindfolds on: duh, substitution is obviously the biggie here.