The Conceited Folly of Certainty

Overconfidence is strangling public discourse and going largely unnoticed.

“Blessed is the man who, having nothing to say, abstains from giving wordy evidence of the fact.”—George Eliot (the pen name of Mary Ann Evans)

I.

Over two thousand years ago, there was a Greek painter named Apelles. He rose to prominence by climbing educational rungs that led him from Ephesus to Sicyon, studying under Ephorus and Pamphilus, respectively. While history has failed to keep close tabs on this enigmatic figure, his main legacy – his painting of Alexander the Great notwithstanding – comes from Naturalis Historia, an encyclopedic tome written by Pliny the Elder.

Pliny recalls the story of a cobbler critiquing a poorly drawn shoe in one of Apelles’ paintings. A perfectionist, Apelles went back that very night and made the corrections as advised. The next morning the cobbler noticed the changes and, emboldened by his impact on such a figure, began openly criticizing to some onlookers the leg Apelles had begun painting. Emerging from his hiding place, Apelles famously responded, Ne sutor ultra crepidam: “Shoemaker, not beyond the shoe.”

Two thousand years later, in 1819, essayist William Hazlitt wrote an open letter to William Gifford, then editor of the Quarterly Review. In the letter he accused Gifford of being “an Ultra-Crepidarian critic,” drawing on the famous anecdote above. Ultracrepidarianism is an obsolete, esoteric word, but with the recent waves of faux expertise online and at reputable media publications, it deserves more appreciation in the modern lexicon.

The term refers to critics who leap beyond their own arena, offering overzealous advice – an all too common occurrence. Suddenly, it seems society is filled with the voices of such ‘experts’ who possess solutions to all problems. A mother who understands the legal limits of free speech. The uncle who has discovered the optimal taxation policy to minimize deadweight loss. And the university freshman who has usurped climate scientists as the authority figure on environmental policy. These examples may seem contrived, but don’t they ring familiar?

Walk into a social engagement and you will likely hear heated conversations on pressing political issues: university students discussing racial equity and policy reform, parents delivering verbal polemics about the current leader. Politics has overtaken friendly discourse in most settings; it is so much of what we discuss. This isn’t a negative development, though. In fact, it’s important to have an educated population – one that is willing to discuss such matters openly. At the same time, it is more important than ever to hold these solemn conversations to a higher standard, because they have deep impacts.

The present danger lies in the certainty with which people believe they are right on every issue and vehemently oppose (often on a personal level) anyone who is philosophically misaligned.

Social media is ripe for heated political conversations that establish factions for and against every post and comment made. Given the amount of time spent (or wasted) on these platforms, there seems to be no moment of reprieve – arguing has become our default setting. Online, individuals value pride over ethics, external perception over truthfulness. People are willing to post misleading news and leave vitriolic ad hominem attacks in the comments in last-ditch efforts to win over the imagined ‘undecideds’ following a post.

There are some, however, who do post out of a genuine desire for change and constructive dialogue. They want to sand the rough edges of societal rhetoric and policy to make it better, more durable. Having conversations about policy decisions is important to that end. Populations that accept the status quo are more likely – whether through malevolence or ignorance – to acquiesce in injustice.

Communal discussions have a compounding effect, bolstering concern for certain practices in society. A conversation between friends causes onlookers to ponder the same issues, and the chain reaction spreads rapidly throughout social cliques. These interactions, online and in-person, serve to inform citizens and develop a cultural ethos of what is right and wrong.


Capitalism is a system that implicitly rewards specialization, which has led to post-secondary education for over 80% of young adults. This has widened the gaps in knowledge between generalists with bachelor’s degrees who read the New York Times every morning and specialists who have spent years rigidly studying a single discipline.

A growing issue is the certainty with which societal views are communicated.

While everyone is entitled to an opinion and should not be judged for lack of accreditation, certain people are more likely to understand nuanced issues. For example, even with a slight background in statistics and academic research, it can be hard for the average university graduate to fully comprehend the discipline-specific research that gets cited in debate.

As will be discussed, experts tend to hold only a marginal advantage over laypeople in predictive ability within their discipline. Their expertise does, however, allow for more thorough debate and understanding of certain issues.

Yet average Joes and Jills openly opine with absolute certainty, pretending that every policy decision is not multi-faceted, with far-reaching impacts across society. During this pandemic, popular figures with no background in health sciences or epidemiology seem to have all the right answers. The casual onlooker engages in virulent Facebook arguments about lockdown policies without regard for the impact they would have on the most destitute, those who lack savings and economic prospects. On the other side of the argument, many neglect the death toll and the strain on healthcare that reopening might entail.

What a select few people – generally moderates, progressives, and centrists – tend to acknowledge is that they can form an opinion based on the evidence they have encountered without being sure that their opinion is the absolute best. As with most policies, the best one usually lies somewhere in the middle, a median of aggregated opinions. These trustworthy individuals usually preface their opinions with, “I’m no expert, but …” or, “It’s impossible to tell, however, given what I’ve read …”

Again, that isn’t to say there shouldn’t be opinionated discussion about pressing issues. Rather, anytime you hear or read someone’s opinion on a complex issue built on the reading of one tendentious op-ed, consider Apelles and the shoemaker. Every cursory post under the guise of researched doctrine should remind readers: “Paint not beyond the shoe.” Know your limits.

II.

In 2018, three European researchers published a paper in the Journal of Experimental Psychology. Their goal was to identify the persuasive power of knowledge by testing a psychological concept known as the “confidence heuristic.” The experiment asked pairs of participants to identify a criminal from facial composites. Each pair would jointly identify a perpetrator from a lineup of suspect photos the police had supplied. The two members were given different rough “composites” (pictures vaguely resembling the perpetrator) and asked to collaborate in making the final decision about which suspect committed the crime.

Within every pair, only one member was given a good composite (one clear enough to allow some confidence in suggesting a suspect). The study showed that those with less descriptive composites deferred to their partners, swayed by the confidence those partners displayed. In most pairs, this led to the correct selection of the suspect. The bottom line: these trials provide evidence for the confidence heuristic, with the authors noting that “confidence does signal accuracy and does encourage people to believe what is said.”

It would be a logical extension, therefore, to assume that people who confidently opine on politics are signalling their expertise and that it would be wise to trust them. The researchers noted, however, that context matters. Their experiments were conducted within “common-interest” tasks – simple tasks in which the participants shared the goal of correctly identifying the suspect. The popular example is a pub quiz with a prize for the winning team. If someone lacks confidence, their opinion is often disregarded; hesitation is a negative signal, representing a lack of knowledge on the subject.

“Beware, however,” one researcher notes, “of situations where people’s interests are misaligned!” For example, if a competing quiz team confidently shouts out an answer, “you’d be foolish to blindly trust their judgment—no matter how confident it is.” The authors conclude by warning, “Hence, before believing their assertive statement that Ed Balls was the most recent winner of Strictly Come Dancing, you’d do well to consider the other person’s agenda.” (For any non-Brit readers, Ed Balls’ Gangnam Style, while quite impressive, did not earn the win.)

Such is the case when people discuss political matters. Sadly, there is no alignment of interests or shared goals. As noted above, many care more about being the winner than being right – an important distinction. The confidence heuristic that humans have relied on and trusted for millennia is therefore nullified in such circumstances. In fact, as the author wittily remarks at the end of her commentary, you should be extremely wary of confidence when engaging with people who don’t share your goal. They will often cut corners and try to win however they can.


But how can people’s interests be misaligned? Doesn’t everybody just want to solve the issues?

Unfortunately not. People care much more about improving their social standing through perception than about reaching common ground. This is one of the most damaging norms in our social psyche. It is so deeply embedded in debate, in fact, that few ever pause to consider it. Many enjoy the comfort of being told they are right, which leads them to check into the alluring echo chamber – where everybody knows your name and agrees with you on everything. We’ve all maintained residency there at some point, whether we were aware of it or not.

Education has conditioned us – to a certain extent – to frame life in terms of right and wrong. In America, the average citizen now receives almost 14 years of educational training. By the time they enter adulthood, neural pathways have formed that associate being right with positive outcomes – scholarships, admissions acceptances, and so on – which makes the case for misalignment quite clear.

Even beyond the institutional influences that shape societal perceptions of rightness, we have evolved with a base need to be right in times of existential threat. When we are on the defensive, under attack or accused of wrongdoing, confidence – and the need to be right – helps us survive. Mel Schwartz, a psychotherapist, notes that “It quickens our pulse, causes us to shout … It is the raison d’etre for most acts of hatred, violence, and warfare.”

The need to be right is a visceral reaction to any threat. In an era of heightened political division and tension, this need often leads us astray from the true goal of intellectual honesty. Believe half of what you hear and none of what you think – in the end, we are all fallible humans who trick ourselves more often than we’d like to believe.

Researchers have highlighted extensively our constant bouts with confirmation bias: “we naturally look for evidence that validates what we already believe, which in turn makes us stronger in our convictions.” This bias is ubiquitous and almost always goes unnoticed. When you search the web for answers to a question, you are more likely to select links that promise to confirm your existing opinion. Even in writing this essay, I have unconsciously catered my search terms to align with existing views. (Noting this, I have gone back through all sections and tried to find disruptive evidence to incorporate into my writing.)

Thus, it is not surprising to see active social media users peppering comment sections with biased sources in an effort to project confidence. Instinctively, this is done in the hope of being regarded as right and collecting whatever social capital has been put into the pot. These forums are terrible for fostering honest and constructive discussion. That primal need kicks in when you think that every post or comment you publish might be seen by your entire social circle – hundreds of peers with the ability to judge you.

Dialogue needn’t be fruitless, however. Research shows that conflict can be extremely constructive and bring two parties closer together when the environment is suitable for honest discussion. The following factors contribute to meaningful debate and discussion:

1. Both parties must be willing to speak and to hear others in a respectful manner. Good-faith discussion has become an arcane relic of times past, the abacus of modern political debate.

2. Discussions should also aim to preserve the underlying relationship. Coalitions are built amongst like-minded individuals, and it is hard to build these strong connections when political discrepancies threaten to erode the relationship’s foundation.

3. Attempts must be made to see the other’s perspective. If you fail to understand why people process factual events differently, then you will be unable to understand the conflict from any angle other than your own.

4. Discussions must be accommodating and seek ways to improve the behaviour of participants.

5. The goal must be a better future state, acknowledging that this progressive ideal is the underpinning of all historical development.

6. Participants need to exercise self-awareness, knowing – as mentioned above – that emotions surely impact their views and the solutions they might propose.

These all contribute to an overarching theme: putting aside personal pride and bias while trying to conduct productive discussions. Overconfidence does not bode well for healthy debate. Certainty and openness to change are inversely correlated – as you approach absolute certainty, you can no longer accept any significant probability of being wrong on an issue.

Since conversations are geared towards being right, the misalignment of interests between parties has grown dramatically. So when people exude confidence in every position, you should question the evolutionary signal we are bred to perceive: that confidence means being right. People who admit a margin of error and address their uncertainty should be lauded and trusted more than those who ignorantly press the attack.

There is a Catch-22 here, in that common rhetoric says we should second-guess people who second-guess themselves. But research supports the hypothesis that interrogating one’s own views through devil’s advocacy lends analytical rigor to the development of opinions. All of which leads me to wonder: did the shoemaker really care about Apelles making the best painting, or just about being admired by the public as a notable art critic?

III.

There are people who get a pass when they propose ideas with confidence: experts. The scientific method helps develop logical conclusions (and sometimes proposals) surrounding certain issues. But the problem of overconfidence occurs in academia just the same. Some of the most thoughtful and dedicated researchers acknowledge their cognitive biases in conducting research. P-hacking is a common technique for coaxing significance out of inconclusive evidence. And the research system has shifted to favor quantity over quality, presenting researchers with the daunting threat of ‘publish or perish.’

But for all the system’s flaws (an entire essay is in the works on this topic alone), researchers rely on observation and evidence when drawing conclusions. Thus, to prove a point we find it significant to cite direct observation, e.g., “When prisons added one additional staff member, the number of violent incidents decreased by X%.” This is a statement you would likely come across in the scientific community. What you would never see is, “A prison guard stated that one more member would help reduce violent incidents by X%.” This is an argumentative fallacy, focusing on the person over the data – an argument from authority.

Arguments from authority happen when the opinion of an expert on a topic is used as evidence to support an argument. It’s a well-known logical fallacy, although it has been a divisive topic. For example, in Introduction to Logic, a common introductory textbook for university freshmen, Copi and Cohen write:

“When we argue that a given conclusion is correct on the ground that an expert authority has come to that judgment, we commit no fallacy. Indeed, such recourse to authority is necessary for most of us on very many matters. Of course, an expert’s judgment constitutes no conclusive proof; experts disagree, and even in agreement they may err; but expert opinion surely is one reasonable way to support a conclusion.”

And in Logic, another popular course supplement, Baronett writes:

“The appeal to expert testimony strengthens the probability that the conclusion is correct, as long as the opinion falls within the realm of the expert’s field.”

These excerpts play down the fallacy, endorsing a mode of ‘reasoning’ that invites cognitive bias – and this is what is commonly taught to students.


In 1923, a paper was published in the Journal of Experimental Zoology by an expert named Theophilus Painter. Painter declared in this report that humans have 24 pairs of chromosomes. This was a major development in the field and resulted in his receiving the 1934 Daniel Giraud Elliot Medal from the National Academy of Sciences.

Scientists propagated this claim without auditing Painter’s poor data or the conflicting observations in his initial paper – it simply came to be held as fact. Over thirty years later, in 1956, researchers Joe Hin Tjio and Albert Levan published “The Chromosome Number of Man” in Hereditas, a scientific journal focused on genetics. In the paper, using more advanced techniques, the authors examined the chromosomes in human somatic cells and found that the actual number of pairs was 23.

Yet for over three decades, top-tier scientists cited the author instead of the (faulty) data he provided. Even textbooks showing microscopic pictures with 23 identifiable pairs reported the number as the oft-cited 24. Experts fell prey to confirmation bias as “most cytologists, expecting to detect Painter’s number, virtually always did so.”

Painter’s influence was so great that scientists preferred to believe his count over the actual evidence, and researchers who obtained the accurate number modified or discarded their data to agree with him. Carl Sagan was right when he cogently proposed that “One of the great commandments of science is, ‘Mistrust arguments from authority.’ … Too many such arguments have proved too painfully wrong. Authorities must prove their contentions like everybody else.”

In 1989, Dr. Martin Fleischmann and Dr. Stanley Pons – electrochemists working at the University of Utah – announced that they had found a way to create nuclear fusion at room temperature. This phenomenon, called “cold fusion,” would supposedly allow governments to produce immense amounts of nuclear energy with far smaller capital and infrastructure requirements. Moti Mizrahi of St. John’s University in Queens, New York, proposes the following:

“Suppose, then, that, shortly after their announcement, a non-expert puts forward the following argument from expert opinion:

(1) Electrochemists Fleischmann and Pons say that nuclear fusion can occur at room temperature.

(2) Therefore, nuclear fusion can occur at room temperature.”

This line of thinking, used often enough, will lead mostly to false conclusions. There is no substantive reasoning behind it, only optimistic folly. There will be the odd occasion where such logic produces the correct result, but that will be the exception, not the norm.

Indeed, shortly after the initial publication, fellow researchers could not achieve the same result – it turns out that nuclear fusion is not currently possible at room temperature. This short vignette illustrates that “the mere fact that two electrochemists say that nuclear fusion can occur at room temperature is not a particularly strong reason to accept the claim that nuclear fusion can occur at room temperature.”

But isn’t that just one example? And am I not falling prey to the same fallacy, citing an authority and limited data to imply empirical evidence? Good observation! Let’s dig further.

In 2005, Philip Tetlock – now the Annenberg University Professor at the University of Pennsylvania – published the results of a long-term study analyzing numerous political predictions made by ‘experts’ (academics, economists, policymakers, etc.). His results show that these experts were only slightly (and insignificantly) more accurate than chance. This is to say that the ‘experts’ may as well have been guessing. Or as Tetlock himself puts it, most of the experts he studied did no better than “a dart-throwing chimpanzee.”

Research seems to confirm this hypothesis time and again. In 2010, David Freedman published “Wrong: Why Experts* Keep Failing Us – and How to Know When Not to Trust Them.” The findings confirm Tetlock’s view and provide some startling observations on expert opinion:

1. Approximately two-thirds of the findings published in top medical journals are rejected after a few years;

2. There is a 1 in 12 chance that a physician’s diagnosis will be wrong to the extent that it could cause significant harm to the patient;

3. Most studies published in economics journals are rejected after a few years (i.e., the results of the studies are subsequently considered to be incorrect);

4. Tax returns prepared by professionals are more likely to contain errors than tax returns prepared by nonprofessionals.

Other research finds that clinical psychologists perform no better (judged by outcome) than non-experts. Professors Colin Camerer and Eric Johnson also found that expert decisions are often no more accurate than non-expert decisions and are much less accurate than an automated decision procedure. The body of evidence arguing against expert consensus goes on and on, but I’ll spare you the endless barrage of one-sentence summaries of the findings.

If experts who spend their entire lives studying specific disciplines possess only a small edge in predictive prowess over the layman, who should be cited in debate? This is a good question, and one that not many have the answer to. One consideration is that there isn’t always a perfect solution: a glass-slipper, Goldilocks-just-right, one-size-fits-all silver bullet. We are imperfect beings to start; throw in extremely complex issues rife with emotional responses and you can practically see objectivity waving goodbye.

Therefore, it is troubling to observe people prophetically commenting on such issues ad nauseam with the certainty of Nostradamus. Arguments should be based on evidence, not expert status. And in reality, most of the ideas proposed will be revisited and revised. This is good – it is a sign of progress. But we should at least understand how little our certainty is worth. Opening ourselves up to the possibility of being wrong can only assist in the development of better logical understandings and arguments.

This does not mean, however, that to have an informed opinion one must dig through the minutiae of research papers cluttered with esoteric jargon. It simply means exercising awareness of our limited capacity to fully understand complex issues.

IV.

Back to Greece once more. The sophists of Ancient Greek society (from the root word sophia, meaning “wisdom” or “skill”) were itinerant intellectuals who taught courses in various subjects to less educated people. They claimed that they could find the answer to any question, and they pioneered rhetoric to persuade pupils of their solutions. History memorialized this unscrupulous exploitation of paying customers in the term “sophism”: “a [fallacious] argument for displaying ingenuity in reasoning or for deceiving someone.”

These were the elites of Greek society, an intellectually superior breed. They had the answers to questions ranging from the arts to the sciences, athletics, and physiology, as well as political quandaries. Biased historical record-keeping obscures the true nature of their teachings, as Plato’s works are the main source of record for sophist writings and thought – and Plato did not take kindly to their brand of intellectual witchcraft.

The group this essay is aimed at is the contemporary sophists. We have produced a new class of the brightest people who project certainty on almost every issue. And it is hardly a coincidence that, throughout history, one thread has connected many truly wise thinkers: intellectual modesty.

“Real knowledge is to know the extent of one’s ignorance.” – Confucius

“Ignorance more frequently begets confidence than does knowledge.” – Charles Darwin

“Convictions are more dangerous enemies of truth than lies.” – Friedrich Nietzsche

“One of the painful things about our time is that those who feel certainty are stupid, and those with any imagination and understanding are filled with doubt and indecision.” – Bertrand Russell

Margin of error is a concept central to statistics. If you produce experimental results that seem groundbreaking, they are usually expressed with a confidence interval. This is a form of mathematical reticence, an acknowledgment that no analysis is perfectly precise or comprehensive.
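To make that reticence concrete with a purely hypothetical example (the survey and every number in it are invented for illustration): a poll of 1,000 people finding 52% support for a policy would conventionally be reported with a 95% confidence interval rather than as a bare figure,

$$0.52 \pm 1.96\sqrt{\frac{0.52 \times 0.48}{1000}} \approx 0.52 \pm 0.03,$$

that is, support lies somewhere between roughly 49% and 55%. The interval is the statistician’s built-in way of saying “I might be wrong.”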

Yet we expect policymakers to play god and express unrelenting confidence – remember the confidence heuristic? – that a solution is iron-clad, with no margin for error. And when, inevitably, some of these solutions do not pan out as planned, we shame the people involved instead of considering their thought process.

Cultural norms do not reward the occasional admission of wrongdoing. Politicians who are consistently wrong but apologize should not be lauded as heroes per se, but we have acclimatized to an environment in which admitting any change of mind or accepting any responsibility is a political death sentence. This applies to social interactions about politics as well – say the wrong thing once and do not bother trying to apologize; your bed is already made.


Useful debate seems to have gone the way of the dodo: an extinct relic. Rigidity of confidence confines the soul to a desolate life of blindness. How can ideas progress when certainty sits front and center, unwilling to yield for any passersby? George Bernard Shaw famously remarked that, “Progress is impossible without change, and those who cannot change their minds cannot change anything.”

Debates should function like negotiations: multiple parties striving for the mutual benefit that often accompanies compromise. The alternative is division and political deadlock – in whose throes we currently find ourselves. Cicero once remarked that, “More is lost by indecision than wrong decision. Indecision is the thief of opportunity. It will steal you blind.” Division and polarization are the biggest contributors to stagnant policy, robbing society of potential progress.

To shift away from indecision, charismatic parties – from the grassroots voices scattered throughout online forums to politicians themselves – must shift from sophist to sage. From blind certainty and elitism to margin of error and humanity.

In Plato’s Apology, the Pythia – the oracle at Delphi – declares that no man is wiser than Socrates. “I seem then,” Socrates concluded, “in just this little thing to be wiser than this man at any rate: that what I do not know I do not think I know either.” Such elegant wisdom is seldom found. We would do well to reinstate the modest wisdom of the past rather than ride the intoxicating buzz of a little learning.

As Alexander Pope wrote, apropos of today:

“A little learning is a dangerous thing;
Drink deep, or taste not the Pierian spring:
There shallow draughts intoxicate the brain,
And drinking largely sobers us again.
Fired at first sight with what the Muse imparts,
In fearless youth we tempt the heights of Arts,
While from the bounded level of our mind
Short views we take, nor see the lengths behind.”

V.

On July 6th, 1974, Garrison Keillor of Minnesota Public Radio (MPR) launched a new segment of the “A Prairie Home Companion” program. The new broadcast, “News from Lake Wobegon,” became a popular weekly monologue that Keillor presented, with a straight face, as news from his (fictitious) hometown, Lake Wobegon. The setting allowed for satirical and heartbreaking tales that captivated listeners.

The program’s legacy, however, comes from the trait Keillor assigned to his fellow townspeople. Describing it in the inaugural broadcast, he calls Lake Wobegon “the little town that time forgot and the decades cannot improve … where all the women are strong, all the men are good-looking, and all the children are above average.”

Over a decade later, in 1991, social psychologists Van Yperen and Buunk would publish groundbreaking research on “illusory superiority.” The researchers had noticed that people develop positive illusions about their own intellect: when studies asked individuals to rate their qualities and abilities and then measured them in practice, the self-ratings came out inflated. The phenomenon is better known in popular culture as the “Lake Wobegon Effect,” for the apocryphal town where everyone was above average: smarter, sexier, funnier … everything.

While most of the research driving this field is focused on Americans, some studies show that this may not be the case in other cultures. In 2007, researchers from the University of British Columbia published a paper titled “In Search of East Asian Self-Enhancement.” The study concluded that “Within cultures, Westerners showed a clear self-serving bias, whereas East Asians did not, with Asian Americans falling in between.”

The authors confirmed their hypothesis that cultural differences, in a great majority of cases, contribute to Westerners overestimating their abilities. Everyone believes they are better than average and therefore subscribes to the belief that the average is much lower than it really is.

Such was the case for McArthur Wheeler, who grossly overestimated his grasp of basic chemistry. On April 19, 1995, Wheeler robbed two banks near his hometown of Pittsburgh. The robberies went as planned, and he almost got away with it. Except he overlooked one large factor: his intelligence.

Wheeler thought he was a pretty smart guy and had seen how lemon juice could be used to make invisible ink. So, in preparation for the security measures banks operate with, he applied a coating of lemon juice to his face, believing it would keep his face from appearing on the security cameras. When police later apprehended Wheeler, he sighed in disbelief: “But I wore the juice.” About 80% of you might think this is a lie, but it is not. It is cold hard fact, yet another (extreme) example of how bad we are at assessing our own abilities.

The fight for intellectual honesty is a constant uphill sprint. We are not wired to remain objective and exercise humility; we are wired to prove our rightness at all costs. We certainly are not living in a society that truly accepts acknowledgement of wrongdoing. In tribal settings, result trumps intent, and admitting fault will only pin targets to your back. Today, people still cannot get away with uttering “Oops,” which leads us to sustain egocentricity and assume that we have all the right answers and are better than everyone else: welcome to Lake Wobegon.

VI.

There may be only a single word in our language that triggers the thought of one specific person: genius. The connection to that historical figure is so strong, in fact, that many of you have him in mind right now, without my having mentioned his name. Einstein has become a synonym for genius, a man who saw the world at a different level of comprehension.

Einstein, the genius who redefined how we understand the universe, was a complicated man. His life was filled with stark contrasts. Growing up, Einstein always challenged authority and dogmatic thinking – this was the principal reason for his leaving Germany to study elsewhere as a teenager. He himself cited this rebellious nature as the chief reason he was able to make such groundbreaking discoveries later in life. What many fail to acknowledge about Einstein, however, is that, just like everybody else, he was a regular human being with all the accompanying flaws.

As a young adult, Einstein drew inspiration from opposing the status quo and authority in all facets of life. Later, he became the unrelenting authority whom others frequently challenged.

After Einstein’s miracle year of 1905, in which he published four groundbreaking papers while working as a Swiss patent clerk, the physics establishment rejected his extreme views. (Einstein was still largely unknown at the time, and only in retrospect has it become known as a “miracle year.”) Einstein held disdain for the established physicists who treated their own beliefs as fact and dismissed his ideas as radical and inaccurate. In time, Einstein’s ideas were proven, delivering him to superstardom – the likes of which no scientist (and few world leaders) had ever seen before.

But alas, the field some of Einstein’s early theories had helped develop – quantum mechanics – would turn into his kryptonite. When Einstein moved to America and started working at the Institute for Advanced Study in Princeton, New Jersey, young European physicists were gaining more insight into the quantum realm. Niels Bohr, Werner Heisenberg, and Erwin Schrödinger championed the Copenhagen interpretation of quantum physics, which Einstein flatly rejected. This interpretation held that the quantum world does not obey the strict causality of Newtonian physics; it admits only probabilistic explanations.

Einstein’s entire life revolved around causality in the universe – that is what made it appear beautifully complex to him. Accordingly, Einstein did not take kindly to this interpretation of quantum physics, famously remarking that “God doesn’t play dice with the universe.” (“Einstein,” Bohr responded, “stop telling God what to do.”)

This divergence created a rift in the scientific community, and Einstein, whose lifelong belief was in the infinite causality of the universe, would try to poke holes in the theory. Over the following decades, younger physicists would properly rebut these objections. But Einstein never, to his dying day, truly accepted modern quantum theory. He acknowledged on many occasions that there were personal factors that forced him time and again to cut against the grain. “To punish me for my contempt of authority,” Einstein joked, “Fate has made me an authority myself.”

How could such a brilliant individual let personal dogma and rigid belief (overconfidence, in fact) supersede rational discussion? If it could happen to Einstein, it can happen to every single person…


There is a deeper connection between this story and the topic of this essay. It was not picked at random to tell the reader, “Einstein was fallible; therefore, we all are.” It happens to deal quite directly with the main arena of our certainty and overconfidence: politics.

The part of quantum mechanics that Einstein failed to accept was that in the quantum (vanishingly small) realm, matter behaves as both particle and wave – two very distinct objects at typical scales. It is impossible to identify the state of such matter without measuring it. This is the classic scenario illustrated by Schrödinger’s cat. Until the box is opened, the cat is in a superposition of being either dead or alive. Its actual state cannot be known without observation; only a probability can be assigned.
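In the standard notation of quantum mechanics – offered here only as a minimal sketch of the idea, not as anything drawn from the sources above – the cat’s state before observation is a superposition whose squared coefficients are the probabilities of each outcome:

$$|\psi\rangle = \alpha\,|\text{alive}\rangle + \beta\,|\text{dead}\rangle, \qquad P(\text{alive}) = |\alpha|^2, \quad P(\text{dead}) = |\beta|^2, \quad |\alpha|^2 + |\beta|^2 = 1.$$

Only when the box is opened does the superposition resolve into a definite outcome – which is exactly the role hindsight plays for policy in the paragraphs that follow.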

The same holds true for politics. When there is an argument about policy, there is not necessarily a right answer – just as there is no objective answer to the question, “Is the cat dead or alive?” The only thing we know is that there are some probabilities of it being a good or bad policy based on unbiased analysis and critical thought (and even then, there is no objective definition of such things).

As in quantum mechanics, we can only determine the state (or the effectiveness) of a policy in hindsight, once there has been some implementation and ample data to observe the impacts. No one has the foresight to predict these outcomes, so we try our best to use rational thought and hold views, or put in place policies, that have the best chance of success – our best candidates. Then we recalibrate our formulas and try again, piecemeal, until we get to a better place.

This, I posit, is the only objective truth in the world of policy and opinion. That there is nothing close to certainty in the daily arguments we have. That we are riddled with human error, constantly betraying the better angels of our nature. Accordingly, you might have noticed an excessive use of the following terms in this essay: maybe, might, could, etc. Using these words is an acknowledgment of uncertainty, a way to still contribute without the detrimental egotism that passes for charisma these days.

Many believe that William Butler Yeats’ “The Second Coming” was about an impending apocalyptic revolution. The words make the reader feel dizzy, uncomfortable, as if some ominous presence were waiting to pounce. Yeats thought the world was heading for hell, about to be overrun with carnage. Amid such solemn worry, he makes a great observation. “The best,” he writes, “lack all conviction, while the worst are full of passionate intensity.”

Maybe the Second Coming is at hand right now. People are arguing for sport, to be right, rather than to reach common ground and progress. We judge people by their worst and look to diminish our fellow man. Good-hearted peers on all sides of the political spectrum are silenced into complicity, unable to voice opinions or ask genuine questions. Naïve conviction has set us on a path to demise. Many think the American Experiment is drawing its dying breath.

“What rough beast, its hour come round at last, slouches towards Bethlehem to be born?” wondered Yeats. Maybe the answer is naïve conviction.