Nassim Taleb on Election Forecasting

Nassim Taleb recently posted this mathematical draft on refining election forecasting to his Twitter.

The math isn't super important for seeing why it's so cool. His central idea seems to be that we should forecast the election outcome itself, accounting for the uncertainty between now and election day, rather than build a forecast that takes current poll numbers and implicitly assumes nothing changes.
The mechanism of his model focuses on forming an unbiased forecast time series, formulated with stochastic methods. The mainstream methods right now instead use multilevel Bayesian models that estimate how the election would turn out if it were run today.
Taleb's framing seems to make more sense. While it's safe to assume a candidate will always want the highest possible chance of winning, the process by which two candidates interact is highly dynamic and strategic with respect to the election date.
When you stop to think about it, it's actually remarkable that presidential elections come so incredibly close to 50-50, with a 3-5% margin of victory generally counting as enormous. That closeness captures the underlying dynamics of political game theory.

(At the more local level this isn't always true, owing to factors such as incumbency advantage, local party dominance, strategic funding choices, and various other issues. The point, though, is that when those frictions are dampened by the sheer importance of the presidency, we find ourselves in a scenario where the equilibrium tends toward elections very close to 50-50.)

So, back to the mechanism of the model: Taleb imposes a no-arbitrage condition (borrowed from options pricing) to enforce consistency on the forecast as it evolves over time, and thereby on the Brier score. The concept is similar to financial options, where you can go bankrupt or make money well before the final event: a forecast that swings around is one a trader could profitably bet against. In Taleb's view, if someone like Nate Silver is publishing forecasts that vary widely over time before the election, that suggests he hasn't put any time-dynamic constraints on his model.
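Here's a rough sketch of that intuition in Python. It is not Taleb's actual construction: the Gaussian random walk for the vote share, the daily_sd and poll_error numbers, and the normal-approximation win probabilities are all made-up assumptions for illustration. It contrasts a "run it today" nowcast with a forecast that also folds in the volatility remaining before election day, and compares how much each swings from day to day.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
days = 365
daily_sd = 0.004  # assumed day-to-day drift in the true vote share (hypothetical)

# latent two-party vote share drifting as a random walk toward election day
share = 0.51 + np.cumsum(rng.normal(0.0, daily_sd, days))

poll_error = 0.02  # assumed uncertainty about where opinion stands right now

# "if the election were run today" estimate: only polling error matters
nowcast = norm.sf(0.50, loc=share, scale=poll_error)

# forecast of the final result: also fold in the volatility still to come
days_left = np.arange(days, 0, -1)
remaining_sd = daily_sd * np.sqrt(days_left)
forecast = norm.sf(0.50, loc=share, scale=np.sqrt(poll_error**2 + remaining_sd**2))

# the uncertainty-adjusted forecast should swing much less from day to day
print("mean daily move, nowcast:  %.3f" % np.abs(np.diff(nowcast)).mean())
print("mean daily move, forecast: %.3f" % np.abs(np.diff(forecast)).mean())
```

Under these toy assumptions the uncertainty-adjusted forecast moves far less early on, which is the kind of time-consistency the no-arbitrage condition is meant to impose.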

The math does rest on the assumption that with high uncertainty, far out from the election, the best forecast is close to 50-50. That assumption would have to be tested empirically. Still, stepping aside from the math, it does feel intuitive that an election forecast showing high variation a year out from the event is not worth relying on, and that sticking closer to 50-50 would produce a better full-sample Brier score.
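The limiting behaviour behind that intuition, under the same simplified Gaussian sketch as above (again, illustrative numbers, not figures from Taleb's paper): hold a fixed two-point polling lead and increase the uncertainty remaining before election day, and the best-guess win probability collapses toward 50%.

```python
from scipy.stats import norm

lead = 0.02  # a fixed two-point lead in the polls (hypothetical)

# as the volatility remaining before election day grows, the best-guess
# probability of winning is pulled back toward 50%
for remaining_sd in (0.01, 0.05, 0.10, 0.25):
    p = norm.sf(0.50, loc=0.50 + lead, scale=remaining_sd)
    print(f"remaining sd {remaining_sd:.2f} -> win probability {p:.2f}")
# remaining sd 0.01 -> win probability 0.98
# remaining sd 0.05 -> win probability 0.66
# remaining sd 0.10 -> win probability 0.58
# remaining sd 0.25 -> win probability 0.53
```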


I'm not familiar enough with the practical modelling to say whether this is feasible. Sometimes the ideal model is too hard to estimate.

I'm interested in hearing any thoughts on this from people who are familiar with forecasting or have an interest in the modelling behind it.

I also have a specific question to tie this back to a rationality-based framework: when you read Silver (or your preferred reputable election forecaster; I like Andrew Gelman) post their forecasts before the election, do you accept them as equal to or better than any estimate you could come up with? Or do you make a mental adjustment or apply a discount based on some factor you think they've left out? That could be divergence from prediction markets, or adjustments based on perceived changes in nationalism or politician-specific skills (e.g. Scott Adams claimed he could predict that Trump would persuade everyone to vote for him; while it's tempting to write him off as a pundit charlatan, or to claim he lacks sufficient proof, we also can't prove his model was wrong). I'm interested in learning the reasons we might disagree with, or be reasonably skeptical of, the polls, knowing of course that any such adjustment must be tested to know the true answer.

This is my first LW discussion post, so I'm open to feedback on how it could be improved.