Biases of Intuitive and Logical Thinkers

Any intuition-dominant thinker who’s struggled with math problems, or logic-dominant thinker who’s struggled with small talk, knows how difficult and hopeless the experience feels. For a long time I was an intuition thinker; then I developed a logical thinking style, and soon it ended up dominating, granting me the luxury of experiencing both kinds of struggles. I eventually learned to apply the thinking style better optimized for the problem I was facing. Looking back, I realized why I kept sticking to one extreme.

I hypothesize that one-sided thinkers develop biases and tendencies that prevent them from improving their weaker mode of thinking. These biases cause a positive feedback loop that further skews thinking styles in the same direction.

The reasons why one style might be overdeveloped and the other underdeveloped vary greatly. Genes have a strong influence, but environment also plays a large part. A teacher may have inspired you to love learning science at a young age, causing you to foster a thinking style better for learning science. Or maybe you grew up very physically attractive and found socializing with your peers a lot more rewarding than studying after school, causing you to foster a thinking style better for navigating social situations. Environment can be changed to help develop certain thinking styles, but that should be supplementary to exposing and understanding the biases you already have. Entering an environment that penalizes your thinking style can be uncomfortable, stressful and frustrating if you’re unprepared. (Such a painful experience is part of why these biases cause a positive feedback loop: it makes us avoid environments that require the opposite thinking style.)

Despite genetic predisposition and environmental circumstances, there’s room for improvement, and exposing these biases and learning to account for them is a great first step.

Below is a list of a few biases that worsen our ability to solve a certain class of problems and keep us from improving our underdeveloped thinking style.

Intuition-dominant Biases

Overlooking crucial details

Details matter when trying to understand technical concepts. Overlooking a single word or a sentence’s structure can cause complete misunderstanding, a common blunder for intuition thinkers.

Intuition is really good at making fairly accurate predictions without complete information, enabling us to navigate the world without having a deep understanding of it. As a result, intuition trains us to experience the feeling of understanding something without examining every detail. In most situations, paying close attention to detail is unnecessary and sometimes dangerous. But when learning a technical concept, every detail matters, and the premature feeling of understanding stops us from examining them.

This bias is one that’s more likely to go away once you realize it’s there. You often don’t know which details you’ve missed after you’ve missed them, so merely remembering that you tend to miss important details should prompt you to take closer examinations in the future.

Expecting solutions to sound a certain way

The Internship has a great example of this bias (and a few others) in action. The movie is about two middle-aged unemployed salesmen (intuition thinkers) trying to land an internship with Google. Part of Google’s selection process has the two men participate in several technical challenges. One challenge requires the men and their team to find a software bug. In a flash of insight, Vince Vaughn’s character, Billy, shouts “Maybe the answer is in the question! Maybe it has something to do with the word bug. A fly!” After enthusiastically making several more word associations, he turns to his team and insists they take him seriously.

Why is it believable to the audience that Billy can be so confident about his answer?

Billy’s intuition made an association between the challenge question and riddle-like questions he’d heard in the past. When Billy used his intuition to find a solution, his confidence in a riddle-like answer grew. Intuition recklessly uses irrelevant associations as reasons for narrowing down the space of possible solutions to technical problems. When associations pop into your mind, it’s a good idea to legitimize them with supporting reasons before acting on them.

Not recognizing precise language

Intuition thinkers are multi-channel learners: all senses, thoughts and emotions are used to construct a complex database of clustered knowledge for predicting and understanding the world. With such robust information-extracting ability, correct grammar and word usage are, more often than not, unnecessary for meaningful communication.

Communicating technical concepts in a meaningful way requires precise language. Connotation and subtext are stripped away so words and phrases can purely represent meaningful concepts inside a logical framework. Intuition thinkers communicate with imprecise language, gathering meaning from context to compensate. This makes it hard for them to recognize when to turn off their powerful information extractors.

This bias explains part of why so many intuition thinkers dread math “word problems”. Introducing words and phrases rich with meaning and connotation sends their intuition running wild. It’s hard for them to find correspondences between words in the problem and variables in the theorems and formulas they’ve learned.

The noise intuition brings makes it hard to think clearly. It’s hard for intuition thinkers to tell whether their automatic associations should be taken seriously, and without a reliable way to discern, wrong interpretations of words go undetected. For example, without any physics background, an intuition thinker may read the statement “Matter can have both wave and particle properties at once” and believe they completely understand it. Unrelated associations of what matter, wave and particle mean blindly take precedence over the technical definitions.

The slightest uncertainty about what a sentence means should raise a red flag. Going back and finding the correspondence between each word and how it fits into a technical framework will eliminate that uncertainty.

Believing their level of understanding is deeper than it is

Intuition works on an unconscious level, making intuition thinkers unaware of how they know what they know. Not surprisingly, their best tool for learning what it means to understand is intuition. The concept of “understanding” is a collection of associations from experience. You may have learned that part of understanding something means being able to answer questions on a test with memorized factoids, or knowing what to say to convince people you understand, or just knowing more facts than your friends. These are not good methods for gaining a deep understanding of technical concepts.

When intuition thinkers optimize for understanding, they’re really optimizing for a fuzzy idea of what they think understanding means. This often leaves them believing they understand a concept when all they’ve done is memorize some disconnected facts. Not knowing what it feels like to have deeper understanding, they become conditioned to always expect some amount of surprise. Even at their peak feeling of understanding, they’re less confident than logical thinkers are at theirs. This lower confidence disincentivizes intuition thinkers from investing in learning technical concepts, further keeping their logical thinking style underdeveloped.

One way I overcame this tendency was to constantly ask myself “why” questions, like a curious child bothering their parents. The technique helped me uncover the unknown unknowns that had been making me feel overconfident in my understanding.

Logic-dominant Biases

Ignoring information they cannot immediately fit into a framework

Logical thinkers have and use intuition; the problem is they don’t feed it enough. They tend to ignore valuable intuition-building information if it doesn’t immediately fit into a predictive model they deeply understand. While intuition thinkers don’t filter out enough noise, logical thinkers filter out too much.

For example, if a logical thinker doesn’t have a good framework for understanding human behavior, they’re more likely to ignore visual input like body language and fashion, or auditory input like tone of voice and intonation. Human behavior is complicated; there’s no framework to date that can make perfectly accurate predictions about it. Intuition can build powerful models despite working with many confounding variables.

Bayesian probability enables logical thinkers to build predictive models from noisy data without having to use intuition. But even then, the first step of making a Bayesian update is data collection.
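To make the idea concrete, here is a minimal sketch of what such an update looks like. The Beta-Binomial model and the example numbers are my own illustration, not something from the text: you collect noisy yes/no observations first, then fold them into a prior.

```python
# Minimal sketch: a Beta-Binomial update, building a predictive model
# from noisy binary data (e.g., "did that conversational tactic work?").
def beta_update(alpha, beta, successes, failures):
    """Fold new observations into a Beta(alpha, beta) prior."""
    return alpha + successes, beta + failures

# Uniform prior: Beta(1, 1) treats every success rate as equally likely.
alpha, beta = 1, 1

# Data collection comes first; suppose 10 noisy trials yield 7 successes.
alpha, beta = beta_update(alpha, beta, successes=7, failures=3)

# Posterior mean estimate of the success rate: 8 / 12 ≈ 0.667.
posterior_mean = alpha / (alpha + beta)
```

The point of the sketch is the ordering: the model can be as simple as two counters, but it produces nothing until the observations are gathered.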

Combating this tendency requires you to pay attention to input you normally ignore. Supplement your broader attentional scope with a researched framework as a guide. Say you want to learn how storytelling works. Start by grabbing resources that teach storytelling and learn the basics. Out in the real world, pay close attention to sights, sounds, and feelings when someone starts telling a story, and try matching that sensory input to the storytelling elements you’ve learned about. Once the basics are subconsciously picked up by habit, your conscious attention will be freed up to make new and more subtle observations.

Ignoring their emotions

Emotional input is difficult to factor, especially because you’re emotional at the time. Logical thinkers are notorious for ignoring this kind of messy data, consequently starving their intuition of emotional data. Being able to “go with your gut feelings” is a major function of intuition that logical thinkers tend to miss out on.

Your gut can predict whether you’ll get along long-term with a new SO, what kind of outfit would give you more confidence in your workplace, whether learning tennis in your free time will make you happier, or whether you’d prefer eating a cheeseburger over tacos for lunch. Logical thinkers don’t have enough data collected about their emotions to know what triggers them. They tend to get bogged down and misled by the objective yet trivial details they do manage to factor. A weak understanding of their own emotions also leads to a weaker understanding of others’ emotions. You can become a better empathizer by better understanding yourself.

You could start from scratch and build your own framework, but self-assessment biases will impede productivity. Learning an existing framework is a more realistic solution. You can find resources with some light googling, and I’m sure CFAR teaches some good ones too. You can improve your gut feelings too. One way is making sure you’re always consciously aware of the circumstances you’re in when experiencing an emotion.

Making rules too strict

Logical thinkers build frameworks in order to understand things. When adding a new rule to a framework, there’s motivation to make the rule strict: the stricter the rule, the more predictive power, the better the framework. But when the domain you’re trying to understand contains multivariable chaotic phenomena, strict rules are likely to break. The result is something like the current state of macroeconomics: a bunch of logical thinkers preoccupied with elegant models and theories that hold only under assumptions that make them useless in practice.

Following rules that are too strict can have bad consequences. Imagine John the salesperson is learning how to make better first impressions and has built a rough framework so far. John has a rule that smiling always helps make people feel welcomed the first time they meet him. One day he makes a business trip to Russia to meet with a prospective client. The moment he meets his Russian client, he flashes a big smile and continues to smile despite negative reactions. After a few hours of talking, his client reveals she felt he wasn’t trustworthy at first and almost called off the meeting. It turns out that in Russia, smiling at strangers is a sign of insincerity. John’s strict rule didn’t account for cultural differences, keeping him from updating on his client’s reactions and putting him in a risky situation.

The desire to hold onto strict rules can make logical thinkers susceptible to confirmation bias too. If John made an exception to his smiling rule, he’d feel less confident about his knowledge of making first impressions, subsequently making him feel bad. He may also have to amend some other rule that relates to the smiling rule, which would further hurt his framework and his feelings.

When feeling the urge to add a new rule, take note of the circumstances in which the evidence for the rule was found. Add exceptions that limit the rule’s predictive power to similar circumstances. Another option is to entertain multiple conflicting rules simultaneously, shifting weight from one to the other as you gather more evidence.
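That last option can be sketched mechanically. The rules, weights, and likelihood numbers below are invented for illustration: two conflicting rules are kept alive at once, and each piece of evidence multiplies a rule’s weight by how well that rule predicted it.

```python
# Hypothetical sketch: entertain conflicting rules simultaneously and
# shift weight between them as evidence accumulates.
def reweight(weights, likelihoods):
    """Scale each rule's weight by how well it predicted the evidence,
    then renormalize so the weights still sum to 1."""
    scaled = [w * lik for w, lik in zip(weights, likelihoods)]
    total = sum(scaled)
    return [s / total for s in scaled]

# Rule A: "smiling always helps first impressions."
# Rule B: "whether smiling helps depends on culture."
weights = [0.5, 0.5]  # start undecided between the two rules

# The Russia meeting: Rule A predicted the client's reaction poorly (0.2),
# Rule B predicted it well (0.8), so weight shifts toward Rule B.
weights = reweight(weights, [0.2, 0.8])
# weights is now [0.2, 0.8]
```

Neither rule is deleted; a few meetings where smiling goes well would shift weight back toward Rule A, which is exactly the graceful updating that a single strict rule forbids.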

Anyone have more biases/tendencies to add?