I always see [rates of compliance, lockdown fatigue, which kinds of restrictions are actually followed, etc.] discussed in very qualitative, intuitive terms. We talk of cases, tests, fatality rates, and reproduction numbers quantitatively. We look at tables and charts of these numbers, we compare projections of them.
But when the conversation turns to lockdown compliance, the numbers vanish, the claims range over broad and poorly specified groups (instead of percentages and confidence intervals we get phrases like “most people,” or merely “people”), and everything is (as far as I can tell) based on gut feeling.
Even a simple toy model could help, by separating intuitions about the mechanism from those about outcomes. If someone argues that a number will be 1000x or 0.001x the value the toy model would predict, that suggests either
(a) the number is wrong or
(b) the toy model missed some important factor with a huge influence over the conclusions one draws
Either (a) or (b) would be interesting to learn.
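A minimal sketch of the kind of toy model this is asking for (purely illustrative; the feedback form and every parameter value here are invented placeholders, not estimates): discrete-day SIR dynamics where the population cuts contact in response to recent deaths.

```python
# Toy model sketch (illustrative only): discrete-day SIR dynamics where
# contact rates fall as yesterday's deaths rise.
# Every parameter value is an invented placeholder, not an estimate.

def simulate(days=365, pop=10_000_000, r0=4.5, infectious_days=5,
             ifr=0.005, fear=0.01):
    """Return the daily effective reproduction number under a crude
    behavioral feedback: transmission shrinks as recent deaths grow."""
    s, i = pop - 100.0, 100.0
    gamma = 1 / infectious_days
    beta0 = r0 * gamma                       # uncontrolled transmission rate
    deaths = 0.0                             # yesterday's deaths
    r_eff_history = []
    for _ in range(days):
        beta = beta0 / (1 + fear * deaths)   # behavioral feedback term
        new_inf = beta * s * i / pop
        new_rec = gamma * i
        s, i = s - new_inf, i + new_inf - new_rec
        deaths = ifr * new_rec               # toy assumption: deaths track recoveries
        r_eff_history.append(beta * s / (gamma * pop))
    return r_eff_history
```

With these placeholder numbers, R starts near the uncontrolled value, and the feedback then pulls it toward 1 and holds it there while susceptibles slowly deplete. The point is not realism: it is that even this crude mechanism turns "how strong is the behavioral response?" into one inspectable parameter (`fear`) rather than a gut feeling.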
----
Many of the same thoughts were in my mind when I linked that study on the previous post.
----
IMO, it would help clarify arguments about the “control system” a lot to write down the ideas in some quantitative form.
As I wrote here:
Even a simple toy model could help, by separating intuitions about the mechanism from those about outcomes. If someone argues that a number will be 1000x or 0.001x the value the toy model would predict, that suggests either
(a) the number is wrong or
(b) the toy model missed some important factor with a huge influence over the conclusions one draws
Either (a) or (b) would be interesting to learn.
----
One basic question I don’t feel I have the answer to: do we know anything about how powerful the control system is?
Roughly, “the control system” is an explanation for the fact that R stays very close to 1 in many areas. It oscillates up and down, but it never gets anywhere near as low as 0, or anywhere near as high as the uncontrolled value of ~4.5.
As long as this trend holds, it’s like we’re watching the temperature of my room when I’ve got the thermostat set to 70F. Sure enough, the temperature stays close to 70F.
This tells you nothing about the maximum power of my heating system. In colder temperatures, it’d need to work harder, and at some low enough temperature T, it wouldn’t be able to sustain 70F inside. But we can’t tell what that cutoff T is until we reach it. “The indoor temperature right now oscillates around 70F” doesn’t tell you anything about T.
Doesn’t this argument work just as well for the “control system”? A toy model could answer that question.
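One way a toy model can frame the "maximum power" question directly (a sketch; the controller form and all numbers are assumptions, not estimates): give the control system a hard ceiling on how much it can cut transmission, and watch what happens on either side of that ceiling.

```python
# Thermostat sketch (all numbers invented): a proportional controller that
# raises its transmission cut while R_eff > 1 and relaxes it below 1,
# but can never cut transmission by more than max_cut (its 'maximum power').

def held_at_one(r0, max_cut, days=200, gain=0.1):
    """Return the R_eff trajectory under a capped proportional controller."""
    cut = 0.0
    history = []
    for _ in range(days):
        r_eff = r0 * (1 - cut)
        history.append(r_eff)
        # push the cut up while R_eff > 1, relax it when below,
        # saturating at the system's maximum capacity
        cut = min(max_cut, max(0.0, cut + gain * (r_eff - 1)))
    return history
```

The thermostat point falls out immediately: for an uncontrolled R0 of 4.5, any cap comfortably above the required cut of 1 − 1/4.5 ≈ 0.78 produces the *identical* trajectory hovering at 1, so observing that trajectory tells you nothing about where the cap is. Only when the cap binds (say `max_cut=0.70`) does R get stuck above 1, revealing the system's limit.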
I agree, and in fact the main point I was getting at with my initial comment is that in the two areas I talked about, the control system and the overall explanation for failure, there’s an unfortunate tendency to toss out quantitative arguments, or even detailed models of the world, and resort instead to intuitions and qualitative arguments. The discussion then tends to turn into a referendum on your personal opinions about human nature and the human condition, which isn’t much use for predicting anything. You can see this in how the predictions panned out: as an anonymous commenter pointed out, ‘control system running out of power’ arguments generally haven’t been predictively accurate on these questions.
The rule-of-thumb that I’ve used—the Morituri Nolumus Mori effect—has fared somewhat better than the ‘control system will run out of steam sooner or later’ rule-of-thumb, both when I wrote that post and since. The MNM tends to predict last-minute myopic decisions that mostly avoid the worst outcomes, while the ‘out of steam’ explanation led people to predict that social distancing would mostly be over by now. But neither is a proper quantitative model.
In terms of actually giving this question some quantitative rigour, it’s not easy. I made an attempt in my old post, suggesting that how far a society can stray from the control-system equilibrium is indicated by how low it managed to push Rt. But the ‘gold standard’ is to work from model projections trained on already-existing data, as I tried to do.
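That lowest-Rt suggestion can be turned into arithmetic directly. A sketch, with placeholder numbers (the R0 of 4.5 comes from upthread; the example Rt of 0.7 is invented): the lowest Rt a society ever achieved gives a lower bound on the largest transmission cut its control system can deliver.

```python
# Placeholder numbers, illustrative only: the lowest Rt a society ever
# achieved bounds (from below) the largest transmission cut it can deliver.

def demonstrated_cut(r0, rt_min):
    """Fraction by which transmission was demonstrably reduced at the low point."""
    return 1 - rt_min / r0

cut = demonstrated_cut(4.5, 0.7)     # suppose Rt once fell to 0.7 under strict measures
# a cut that large could hold R_eff at 1 against any uncontrolled R0 up to:
max_holdable_r0 = 1 / (1 - cut)      # equals r0 / rt_min
```

Note this is only a *lower* bound on the maximum power, since the society may never have needed its full strength, which is exactly the thermostat problem again: sustained Rt near 1 reveals the setpoint, not the capacity.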
As to the second question, the overall explanation for failure: there is some data to work off of, but not much. We know that preexisting measures of state capacity don’t predict covid response effectiveness, which, along with other evidence, supports the ‘institutional sclerosis’ hypothesis I referred to in my original post. Once again, I think a clear mechanism, ‘institutional sclerosis as part of the great stagnation’, is a much better starting point for unravelling all this than the ‘simulacra levels are higher now’ perspective I see a lot around here. That claim is too abstract to falsify easily or to derive genuine in-advance predictions from.