For radical skeptics, though, there is a deeper lesson: the impossibility of picking the influential acorns before the fact. Joel Mokyr compares searching for the seeds of the Industrial Revolution to “studying the history of Jewish dissenters between 50 B.C. and 50 A.D. We are looking for something that at its inception was insignificant, even bizarre, but destined to change the life of every man and woman in the West.”
And:
We often want to know why a particular consequence — be it a genocidal bloodbath or financial implosion — happened when and how it did. Examination of the record identifies a host of contributory causes. In the [crash of Western Airlines flight 2605 in 1979], five factors loom. It is tempting to view each factor by itself as a necessary cause. But the temptation should be resisted. Do we really believe that the crash could not have occurred in the wake of other antecedents? It is also tempting to view the five causes as jointly sufficient. But believing this requires endorsing the equally far-fetched counterfactual that, had something else happened, such as a slightly different location for the truck, the crash would still have occurred.
Exploring these what-if possibilities might seem a gratuitous reminder to families of victims of how unnecessary the deaths were. But the exercise is essential for appreciating why the contributory causes of one accident do not permit the NTSB to predict plane crashes in general. Pilots are often tired; bad weather and cryptic communication are common; radio communication sometimes breaks down; and people facing death frequently panic. The NTSB can pick out, post hoc, the ad hoc combination of causes of any disaster. They can, in this sense, explain the past. But they cannot predict the future. The only generalization that we can extract from airplane accidents may be that, absent sabotage, crashes are the result of a confluence of improbable events compressed into a few terrifying moments.
If a statistician were to conduct a prospective study of how well retrospectively identified causes, either singly or in combination, predict plane crashes, our measure of predictability — say, a squared multiple correlation coefficient — would reveal gross unpredictability. Radical skeptics tell us to expect the same fate for our quantitative models of wars, revolutions, elections, and currency crises. Retrodiction is enormously easier than prediction.
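The retrodiction-versus-prediction gap is easy to reproduce in miniature. Here is a minimal Python sketch, with invented numbers rather than anything from the book: each synthetic accident is a rare conjunction of a few genuine contributory factors plus an idiosyncratic trigger, recorded alongside dozens of irrelevant circumstances. A least-squares fit “explains” the retrospective sample with a respectable squared multiple correlation, and the same coefficients collapse on fresh cases.

```python
# A toy world, not Tetlock's data: accidents are rare conjunctions of a few
# genuine factors, and the analyst also records many irrelevant "causes".
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test = 80, 80
n_real, n_noise = 5, 40   # five genuine factors, forty irrelevant circumstances

def sample(n):
    # Genuine factors are individually common (each present ~70% of the time).
    real = (rng.random((n, n_real)) < 0.7).astype(float)
    noise = rng.standard_normal((n, n_noise))
    # An accident needs all five factors plus an idiosyncratic trigger.
    y = (real.all(axis=1) & (rng.random(n) < 0.7)).astype(float)
    X = np.c_[np.ones(n), real, noise]   # intercept + everything recorded
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Ordinary least squares on the retrospective ("already happened") sample.
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)

def r2(X, y):
    # Squared multiple correlation: 1 - SSE / SST.
    return 1 - ((y - X @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()

print(f"retrospective R^2: {r2(X_tr, y_tr):.2f}")  # high: the past is 'explained'
print(f"prospective R^2:  {r2(X_te, y_te):.2f}")   # near zero, possibly negative
```

With forty-five free parameters and eighty past cases, an impressive fit to history is nearly guaranteed; the prospective number is the honest one.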
And:
Political observers run the same risk when they look for patterns in random concatenations of events. They would do better by thinking less. When we know the base rates of possible outcomes — say, the incumbent wins 80 percent of the time — and not much else, we should simply predict the more common outcome. But work on base rate neglect suggests that people often insist on attaching high probabilities to low-frequency events. These probabilities are rooted not in observations of relative frequency in relevant reference populations of cases, but rather in case-specific hunches about causality that make some scenarios more “imaginable” than others. A plausible story of how a government might suddenly collapse counts for far more than how often similar outcomes have occurred in the past. Forecasting accuracy suffers when intuitive causal reasoning trumps extensional probabilistic reasoning.
Psychological skeptics are also not surprised when people draw strong lessons from brief runs of forecasting failures or successes. Winning forecasters are often skilled at concocting elaborate stories about why fortune favored their point of view. Academics can quickly spot the speciousness of these stories when the forecaster attributes her success to a divinity heeding a prayer or to planets being in the correct alignment. But even these observers can be gulled if the forecaster invokes an explanation in intellectual vogue.
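The base-rate point above also reduces to a few lines of arithmetic. In this minimal sketch the 80 percent figure is the only number taken from the passage; the rest is assumed for illustration. Two forecasters issue constant probabilities for a stream of races and are scored with the Brier score, the standard squared-error measure for probability forecasts (lower is better).

```python
# Toy comparison of a base-rate forecaster with a "vivid story" forecaster.
# Only the 80% incumbent win rate comes from the passage; everything else
# is assumed for illustration.
import random

random.seed(1)
# 1 = incumbent wins; upsets happen 20% of the time.
races = [1 if random.random() < 0.8 else 0 for _ in range(10_000)]

def brier(p_win):
    # Mean squared error of a constant probability forecast for "incumbent wins".
    return sum((p_win - outcome) ** 2 for outcome in races) / len(races)

print(f"base-rate forecaster (p = 0.8):    {brier(0.8):.3f}")  # ~0.16
print(f"story-driven forecaster (p = 0.3): {brier(0.3):.3f}")  # ~0.41
```

Predicting the dull, common outcome wins. The forecaster who puts 0.7 on the dramatic upset because a collapse is easy to imagine pays for it in four races out of five.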