Correlation may imply some sort of causal link.
For guessing its direction, simple models help you think.
Controlled experiments, if they are well beyond the brink
Of .05 significance will make your unknowns shrink.
Replications show there’s something new under the sun.
Did one cause the other? Did the other cause the one?
Are they both controlled by what has already begun?
Or was it their coincidence that caused it to be done?
In October 2016, PredictIt and PredictWise had Clinton at 83 and 91 cents, respectively.
The night before the 2016 election, WCNC published the final next-day forecasts from the NY Times, 538 (when Nate Silver was still running it), and the Huffington Post, which put Clinton’s odds of winning at 84%, 68%, and 98%, respectively.
The Princeton Election Consortium’s model, developed by Princeton neuroscientist Sam Wang, forecast a 99% chance for Clinton. Wang later ate a cricket on live TV, having promised to eat a bug if his forecast turned out badly wrong.
Reuters/Ipsos forecast a 90% chance of a Clinton win.
Overall, it seems the prediction markets were in the same ballpark of wrongness as the media forecasts. Admittedly, the forecast dates here are not identical; a more rigorous comparison would be welcome.
But in this bunch, the best result came from a professional data scientist, Nate Silver, not from a prediction market.
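As a rough sanity check (my own arithmetic, not from any of the sources above), here is a minimal scoring sketch that compares the quoted Clinton probabilities against the actual outcome using Brier score and log loss. A single binary outcome can’t settle much, but it makes the “same ballpark” claim concrete:

```python
import math

# Clinton win probabilities quoted above (market prices read as probabilities).
forecasts = {
    "PredictIt (Oct.)": 0.83,
    "PredictWise (Oct.)": 0.91,
    "NY Times": 0.84,
    "538": 0.68,
    "Huffington Post": 0.98,
    "Princeton Election Consortium": 0.99,
    "Reuters/Ipsos": 0.90,
}

outcome = 0  # Clinton did not win

# Lower is better for both scores; log loss is the negative log of the
# probability the forecaster assigned to what actually happened.
for name, p in sorted(forecasts.items(), key=lambda kv: kv[1]):
    brier = (p - outcome) ** 2
    log_loss = -math.log(1 - p if outcome == 0 else p)
    print(f"{name:30s} p={p:.2f}  Brier={brier:.3f}  log loss={log_loss:.2f}")
```

On either score, 538 comes out best, and the two market prices land between it and the 98–99% forecasts, which matches the eyeball comparison.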
There’s been a huge amount of discourse around the failed 2016 forecasts, and most of it attributes the failure to models that ignored correlated polling errors (which Silver did account for, explaining his less wrong prediction). There may have been underlying partisan bias or conformity warping modeling decisions, but those biases exist in prediction markets too. Money and reputation are on the line for media outlets publishing high-profile quantitative forecasts.
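To make the correlated-errors point concrete, here is a toy Monte Carlo with made-up numbers and a deliberately crude electoral map (not Silver’s actual model): the favorite has the same 3-point lead in ten swing states and must carry six, and we compare an independent-errors model with one that adds a shared national polling error of the same total variance.

```python
import random

N_STATES = 10        # hypothetical swing states (deliberately oversimplified)
NEEDED = 6           # favorite must carry a majority of them
POLL_LEAD = 0.03     # assumed 3-point lead in every state
STATE_SD = 0.03      # assumed per-state polling error (std. dev.)
NATIONAL_SD = 0.02   # assumed shared national polling error (std. dev.)
TOTAL_SD = (STATE_SD ** 2 + NATIONAL_SD ** 2) ** 0.5
TRIALS = 200_000

def favorite_win_prob(correlated: bool) -> float:
    """P(favorite carries at least NEEDED states) under the given error model."""
    wins = 0
    for _ in range(TRIALS):
        shared = random.gauss(0, NATIONAL_SD) if correlated else 0.0
        # Keep total per-state error variance identical; only the correlation differs.
        per_state_sd = STATE_SD if correlated else TOTAL_SD
        carried = sum(POLL_LEAD + shared + random.gauss(0, per_state_sd) > 0
                      for _ in range(N_STATES))
        wins += carried >= NEEDED
    return wins / TRIALS

print(f"independent state errors: {favorite_win_prob(correlated=False):.3f}")
print(f"correlated state errors:  {favorite_win_prob(correlated=True):.3f}")
```

The independent-errors version comes out markedly more confident, because it treats a simultaneous polling miss across many states as nearly impossible; adding the shared error fattens the upset tail, which is the qualitative gap between the ~99% forecasts and Silver’s.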
So overall, I don’t think the 2016 presidential election forecasts are a great example of prediction markets raising the sanity waterline.