Maybe I missed something, or maybe it’s simply that the study of history portrayed to us laypeople is usually so qualitative, but this just sounds like a call to apply quantitative model building and testing to the study of history. With some choice word replacements, you could get the post to sound like basic statistical modeling.
In a lot of my forecasts about the future, I don’t actually use quantitative modeling at all. In fact, the best forecasters aren’t those who rely on such models, but those who make forecasts that are ultimately based on their judgment.
If anything, calling for quantitative modeling to be used can easily result in a kind of “scientism”. Cliodynamics is actually a good example of that. I would instead recommend taking forecasters who have a good track record when making predictions about the future and have them do retrospective forecasting through whatever means they deem appropriate, for example.
I want to make a pitch for the usual historical analysis, though. The need for quantitative modeling to judge the prowess of specific historical figures, for instance, comes mainly from a lack of knowledge of what contemporaries thought, since they would presumably be the best judges. But that same lack of primary sources will often be accompanied by a lack of quantitative data. Not always, which is why both have a place in studying history! In particular, if record-keeping makes quantitative data available but the only usable primary evaluations are obviously biased, resorting to the quantitative data may help. But these models have the same concerns as all other statistical models: validity (are your measures accurate, and are they picking up the construct you think?), endogeneity (the actors certainly interact with their environment, so causal estimates of the effect of one variable on others may be hard to ascertain), omitted-variable bias, and generalizability (probably the most critical, since time marches ever forward and a model that lets us evaluate Napoleon might not be the right model to evaluate Lincoln... but we need multiple observations and will have to select our supposedly relevant population extremely carefully).
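To make the omitted-variable-bias concern concrete, here is a minimal sketch with made-up simulated data (a hypothetical illustration, not any historical dataset): a confounder z drives both x and y, so regressing y on x alone overstates x’s effect, while including z recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical data-generating process: z confounds x and y.
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)
y = 2.0 * x + 3.0 * z + rng.normal(size=n)  # true effect of x is 2.0

# OLS of y on x alone (omitting the confounder z).
X_short = np.column_stack([np.ones(n), x])
beta_short = np.linalg.lstsq(X_short, y, rcond=None)[0]

# OLS including z recovers the true coefficient on x.
X_full = np.column_stack([np.ones(n), x, z])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

print(beta_short[1])  # biased upward, well above 2.0
print(beta_full[1])   # close to the true 2.0
```

The same arithmetic applies whether x is “generalship skill” and z is “army size” or any other historical pairing: if the omitted variable correlates with both the regressor and the outcome, the short regression is systematically biased.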
I don’t know what you’re talking about here. When I talk about “models” in the post, these “models” could just be heuristics in your head, inexplicit intuitions, et cetera. I never called for using statistical modeling to study history and I think excessive reliance on such models at the expense of your judgment is actually a mistake.
You must have misunderstood what I was trying to say in my post for you to make a comment that’s so orthogonal to the point I tried to make, and I think that’s my fault for not being sufficiently clear.
Yes, I misunderstood your post. I appreciate your taking some responsibility as the communicator (e.g., probabilities and likelihoods are pretty quant!), but your post could also have been reasonably read as referring to inexplicit models, and that is on me. Communication breakdowns are rarely on one party alone.
I agree that cliodynamics has been a dicey application of quant modeling to history—the valuable parts of it are generally in the inexplicit modeling rather than the real quant model per se. Inexplicit forecasting is more common, but it’s also less testable (anything but the most extreme falsification fits!) and then again not really all that different from what historians already do. The status quo in history is inexplicit modeling in expert judgment, so I’m not sure that relabeling it or asking historians to think less-inexplicitly-but-not-quite-explicitly will do much to move the field.
Qualitative work is not fated to fall into “just-so” stories, and neither is quantitative work destined to be “scientism.” The key is understanding the internal and external validity of your research.