Planning Fallacy

The Denver International Airport opened 16 months late, at a cost overrun of $2 billion.1

The Eurofighter Typhoon, a joint defense project of several European countries, was delivered 54 months late at a cost of $19 billion instead of $7 billion.

The Sydney Opera House may be the most legendary construction overrun of all time, originally estimated to be completed in 1963 for $7 million, and finally completed in 1973 for $102 million.2

Are these isolated disasters brought to our attention by selective availability? Are they symptoms of bureaucracy or government incentive failures? Yes, very probably. But there’s also a corresponding cognitive bias, replicated in experiments with individual planners.

Buehler et al. asked their students for estimates of when they (the students) thought they would complete their personal academic projects.3 Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?

  • 13% of subjects finished their project by the time they had assigned a 50% probability level;

  • 19% finished by the time assigned a 75% probability level;

  • and only 45% (less than half!) finished by the time of their 99% probability level.

As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”4

More generally, this phenomenon is known as the “planning fallacy.” The planning fallacy is that people think they can plan, ha ha.

A clue to the underlying problem with the planning algorithm was uncovered by Newby-Clark et al., who found that

  • Asking subjects for their predictions based on realistic “best guess” scenarios; and

  • Asking subjects for their hoped-for “best case” scenarios . . .

. . . produced indistinguishable results.5

When people are asked for a “realistic” scenario, they envision everything going exactly as planned, with no unexpected delays or unforeseen catastrophes—the same vision as their “best case.”

Reality, it turns out, usually delivers results somewhat worse than the “worst case.”

Unlike most cognitive biases, we know a good debiasing heuristic for the planning fallacy. It won’t work for messes on the scale of the Denver International Airport, but it’ll work for a lot of personal planning, and even some small-scale organizational stuff. Just use an “outside view” instead of an “inside view.”

People tend to generate their predictions by thinking about the particular, unique features of the task at hand, and constructing a scenario for how they intend to complete the task—which is just what we usually think of as planning.

When you want to get something done, you have to plan out where, when, how; figure out how much time and how many resources are required; visualize the steps from beginning to successful conclusion. All this is the “inside view,” and it doesn’t take into account unexpected delays and unforeseen catastrophes. As we saw before, asking people to visualize the “worst case” still isn’t enough to counteract their optimism—they don’t visualize enough Murphyness.

The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past. This is counterintuitive, since the inside view has so much more detail—there’s a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.

But experiment has shown that the more detailed subjects’ visualization, the more optimistic (and less accurate) they become. Buehler et al. asked an experimental group of subjects to describe highly specific plans for their Christmas shopping—where, when, and how.6 On average, this group expected to finish shopping more than a week before Christmas. Another group was simply asked when they expected to finish their Christmas shopping, with an average response of four days before Christmas. Both groups finished an average of three days before Christmas.

Likewise, Buehler et al., reporting on a cross-cultural study, found that Japanese students expected to finish their essays ten days before deadline. They actually finished one day before deadline. Asked when they had previously completed similar tasks, they responded, “one day before deadline.” This is the power of the outside view over the inside view.

A similar finding is that experienced outsiders, who know less of the details but have relevant memory to draw upon, are often much less optimistic and much more accurate than the actual planners and implementers.

So there is a fairly reliable way to fix the planning fallacy, if you’re doing something broadly similar to a reference class of previous projects. Just ask how long similar projects have taken in the past, without considering any of the special properties of this project. Better yet, ask an experienced outsider how long similar projects have taken.
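For concreteness, here is a minimal sketch of that reference-class procedure in Python. The past durations are hypothetical, and the particular summary statistics (a median point forecast, a 90th-percentile deadline) are one reasonable choice, not a method prescribed by the studies above:

    from statistics import median, quantiles

    # Hypothetical completion times, in days, for broadly similar past
    # projects. The outside view forecasts from this reference class alone,
    # ignoring every special feature of the current project.
    past_durations = [12, 18, 9, 30, 14, 22, 45, 16, 25, 11]

    # The median of the reference class serves as a point forecast.
    point_forecast = median(past_durations)

    # A conservative deadline comes from a high percentile, since overruns
    # are common and completion times tend to be right-skewed.
    conservative = quantiles(past_durations, n=10)[-1]  # ~90th percentile

    print(f"Outside-view forecast: {point_forecast} days")
    print(f"Conservative deadline: {conservative} days")

The point is not the arithmetic but the discipline: nothing about this project’s special features enters the estimate.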

You’ll get back an answer that sounds hideously long, and clearly reflects no understanding of the special reasons why this particular task will take less time. This answer is true. Deal with it.

1I’ve also seen $3.1 billion asserted.

2Roger Buehler, Dale Griffin, and Michael Ross, “Exploring the ‘Planning Fallacy’: Why People Underestimate Their Task Completion Times,” Journal of Personality and Social Psychology 67, no. 3 (1994): 366–381.

3Roger Buehler, Dale Griffin, and Michael Ross, “It’s About Time: Optimistic Predictions in Work and Love,” European Review of Social Psychology 6, no. 1 (1995): 1–32.

4Roger Buehler, Dale Griffin, and Michael Ross, “Inside the Planning Fallacy: The Causes and Consequences of Optimistic Time Predictions,” in Heuristics and Biases: The Psychology of Intuitive Judgment, ed. Thomas Gilovich, Dale Griffin, and Daniel Kahneman (New York: Cambridge University Press, 2002), 250–270.

5Ian R. Newby-Clark et al., “People Focus on Optimistic Scenarios and Disregard Pessimistic Scenarios While Predicting Task Completion Times,” Journal of Experimental Psychology: Applied 6, no. 3 (2000): 171–182.

6Buehler, Griffin, and Ross, “Inside the Planning Fallacy.”