# Intuition should be applied at the lowest possible level

Earlier today I lost a match at Prismata, a turn-based strategy game without RNG. When I analyzed the game, I discovered that changing one particular decision I had made on one turn from A to B caused me to win comfortably. A and B had seemed very close to me at the time, and even after knowing for a fact that B was far superior, it wasn’t intuitive why.

Then I listed the main results from A and B, valued those by intuition, and immediately B looked way better.

One can model these problems on a bunch of different levels, where going from level n to n+1 means hiding the details of level n and approximating their results in a cruder way. On level 1, one would compare the two subtrees whose roots are decisions A and B (this should work just like in chess). Level 2 would be looking at exact resource and attack numbers in subsequent turns. Level 3 would be categorizing the main differences of A and B and giving them intuitive values, and level 4 deciding between A and B directly. What my mistake showcases is that, even in a context where I am quite skilled and which has limited complexity, applying intuition at level 4 instead of 3 led to a catastrophic error.

If you can’t go lower, fine. But there are countless cases of people using intuition on a level that’s unnecessarily high. Hence if it’s worth doing, it’s worth doing with made-up numbers. That is just one example of how applying intuition one level further down (“what quantity of damage arises from this?” rather than “how bad is it?”) can make a big difference. On questions of medium importance, briefly asking yourself “is there any point where I apply intuition on a level that’s higher than necessary?” seems like a worthy exercise.
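The level-3 procedure could be sketched in a few lines of code. Every consequence and number below is made up purely for illustration; nothing comes from the actual game:

```python
# Toy sketch of applying intuition one level down (level 3 instead of
# level 4): list each decision's main consequences, attach made-up
# intuitive values to them, and compare the totals.

def score(consequences):
    """Sum the made-up intuitive values attached to a decision's consequences."""
    return sum(value for _, value in consequences)

# At level 4, gut feel had said A and B were "very close". Listing
# consequences with rough values makes the gap visible.
decision_a = [("keeps defenses intact", 3), ("delays economy a turn", -2)]
decision_b = [("keeps defenses intact", 3), ("extra attacker next turn", 6)]

print(score(decision_a), score(decision_b))  # → 1 9
```

The point is not the arithmetic, which is trivial, but that intuition is queried on the individual consequences rather than on the whole decision.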

Meta: I write this in the spirit of valuing obvious advice, and the suspicion that this error is still made fairly often.

• I think this is a really interesting framing. I like to do something that seems related but slightly different. Where I see what you’re describing as something like “explicitly (system 2) take something down one level (potentially into smaller pieces), and apply intuition (system 1) to each of the pieces”, I like to do “explicitly (system 2) consider the problem from a number of different angles / theories, and try applying intuition (system 1) to each angle, and see whether the results agree or how they differ.”

• To give an example, because I think I’m being too abstract: If I am thinking of making an investment decision, I won’t just query my intuition “is this a good investment?” because it doesn’t necessarily have useful things to say about that. Instead I will query it “how does this seem to compare to an equity index fund”, and “what does an adequacy analysis say about whether there could plausibly be free money here”, and “how does this pattern-match against scams I’m familiar with”, and “what does the Outside View say happens to people who make this type of investment”, and “what does Murphyjitsu predict I will regret if I invest thusly?” This seems similar to your described approach, if not quite the same.
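This multi-angle querying could be sketched roughly as follows; the angles and their verdicts are invented for illustration, not a real evaluation of any investment:

```python
# Toy sketch of the "many angles" approach: explicitly (system 2) enumerate
# framings of the problem, query intuition (system 1) on each, and check
# whether the verdicts agree or where they diverge.

def poll_angles(angles):
    """Collect each angle's intuitive verdict and report whether they all agree."""
    verdicts = {name: query() for name, query in angles.items()}
    return verdicts, len(set(verdicts.values())) == 1

# Each lambda stands in for an intuitive judgment; these are placeholders.
angles = {
    "vs. equity index fund":  lambda: "unfavorable",
    "adequacy analysis":      lambda: "unfavorable",
    "pattern-match to scams": lambda: "unfavorable",
    "outside view":           lambda: "unfavorable",
}

verdicts, all_agree = poll_angles(angles)
print(all_agree)  # → True
```

Disagreement between angles is the interesting output: it flags exactly where a closer, lower-level look is warranted.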

• That looks like a more general approach to me, where going one level deeper could be one of the angles considered, and appealing to the outside view another.

• This looks like an effect of computational costs, not a strategic mistake. Listing the results of two decisions costs time / cognitive effort (i.e., computation); applying a heuristic (intuitively compare action A to action B) is computationally cheaper, but, as you discovered, more error-prone.

Thus, though you chide people for “using intuition on a level that’s unnecessarily high” [emphasis mine], in fact applying intuition (i.e., heuristics) on a higher level may be quite necessary, for boundedness-of-rationality / computational-cost reasons.

• That’s why I said it should be used “on questions of medium importance”. For small recurring decisions, the computational cost could be too high, and for life-changing decisions, one would hopefully have covered this ground already (although on reflection, there are probably counter-arguments here, too). But for everything that we explicitly spend some time on anyway, not bothering to list consequences seems like a strategic mistake to me. Even in the example I used, with only 45 seconds available for each turn, I had enough time to do this. And I did spend some time on this decision; I just used it to double- and triple-check with my intuition, rather than going lower.

• I reflexively tried to reverse the advice, and found it surprisingly hard to think of situations where applying higher-level intuition would be better.

There’s an excerpt by chess GM Mikhail Tal:

We reached a very complicated position where I was intending to sacrifice a knight. The sacrifice was not obvious; there was a large number of possible variations; but when I began to study hard and work through them, I found to my horror that nothing would come of it. Ideas piled up one after another. I would transport a subtle reply by my opponent, which worked in one case, to another situation where it would naturally prove to be quite useless. As a result my head became filled with a completely chaotic pile of all sorts of moves, and the infamous “tree of variations”, from which the chess trainers recommend that you cut off the small branches, in this case spread with unbelievable rapidity.
And then suddenly, for some reason, I remembered the classic couplet by Korney Ivanovich Chukovsky: “Oh, what a difficult job it was. To drag out of the marsh the hippopotamus”. I don’t know from what associations the hippopotamus got into the chess board, but although the spectators were convinced that I was continuing to study the position, I, despite my humanitarian education, was trying at this time to work out: just how WOULD you drag a hippopotamus out of the marsh? I remember how jacks figured in my thoughts, as well as levers, helicopters, and even a rope ladder. After a lengthy consideration I admitted defeat as an engineer, and thought spitefully to myself: “Well, just let it drown!” And suddenly the hippopotamus disappeared. Went right off the chessboard just as he had come on … of his own accord!
And straightaway the position did not appear to be so complicated. Now I somehow realized that it was not possible to calculate all the variations, and that the knight sacrifice was, by its very nature, purely intuitive. And since it promised an interesting game, I could not refrain from making it.

But this is a somewhat contrived example, since it is reminiscent of the pre-rigor, rigor, and post-rigor phases of mathematics (or, more generally, of mastering any skill). And one could argue chess GMs have so thoroughly mastered the lower levels that they can afford to skip them without making catastrophic errors.

Another example that comes to mind is Marc Andreessen in the introduction to Breaking Smart:

In 2007, right before the first iPhone launched, I asked Steve Jobs the obvious question: The design of the iPhone was based on discarding every physical interface element except for a touchscreen. Would users be willing to give up the then-dominant physical keypads for a soft keyboard?
His answer was brusque: “They’ll learn.”

It seems quite clear that Jobs wasn’t applying intuition at the lowest level here, and the end result might well have been worse if he had. He even explicitly says:

You can’t connect the dots looking forward; you can only connect them looking backwards. So you have to trust that the dots will somehow connect in your future. You have to trust in something: your gut, destiny, life, karma, whatever. This approach has never let me down, and it has made all the difference in my life.

I find neither of the examples I came up with convincing. But are there circumstances where applying intuition at lower levels is a strategic mistake?

• I find neither of the examples I came up with convincing. But are there circumstances where applying intuition at lower levels is a strategic mistake?

Applying intuition at lower levels is a strategic mistake when you are substantially more certain that your high-level intuition is well-honed than you are of your ability to explicitly decompose the high level into lower-level components.

(It can also be a strategic mistake for computational-cost reasons, as I outline in my other comment.)

• Also, why is the Steve Jobs example unconvincing? It seems, in fact, an example of the sort of thing I am talking about.

Here’s something that Bruce Tognazzini (HCI expert and author of the famous Apple Human Interface Guidelines) said about Steve Jobs:

Steve Jobs was also one of the greatest human-computer interaction designers of all time, though he would have adamantly denied it. (That’s one of Apple’s problems today. They lost the only HCI designer with any power in the entire company the day Steve died, and they don’t even know it.)

Had you asked Steve Jobs to break down his intuitions into lower-level components, and then evaluate those, he may well have failed. And yet he made incredible, groundbreaking, visionary products, again and again and again. He had good reason to be confident in his high-level intuitions. Why would he want to discard those, and attempt a lower-level analysis?

• I had worded it somewhat poorly; I wasn’t intending to say that Steve Jobs should have attempted a lower-level analysis in technology design.

I just found it unconvincing in the sense that I couldn’t think of an example where applying lower-level intuitions was a strategic mistake for me in particular. As you mention in your other comment, I am not substantially more certain that my high-level intuition is well-honed in any particular discipline.

More generally, Steve Jobs consistently applied high-level intuition to big life decisions too, as evidenced by his commencement speech. On the whole it worked out for him, I guess, but he also tried to cure his cancer with alternative medicine, which he later regretted.