Intuition and Mathematics

While reading the answer to the question ‘What is it like to have an understanding of very advanced mathematics?’ I became curious about the value of intuition in mathematics and why it might be useful.

It usually seems to be a bad idea to solve problems intuitively, or to use our intuition as evidence, when judging issues that our evolutionary ancestors never encountered and that natural selection therefore never optimized us to judge.

And so it seems especially strange to suggest that intuition might be a good tool for making mathematical conjectures. Yet people like Fields medalist Terence Tao seem to believe that intuition should not be disregarded when doing mathematics,

...“fuzzier” or “intuitive” thinking (such as heuristic reasoning, judicious extrapolation from examples, or analogies with other contexts such as physics) gets deprecated as “non-rigorous”. All too often, one ends up discarding one’s initial intuition and is only able to process mathematics at a formal level, thus getting stalled at the second stage of one’s mathematical education.

The point of rigour is not to destroy all intuition; instead, it should be used to destroy bad intuition while clarifying and elevating good intuition. It is only with a combination of both rigorous formalism and good intuition that one can tackle complex mathematical problems...

The author mentioned at the beginning also makes the case that intuition is an important tool,

You are often confident that something is true long before you have an airtight proof for it (this happens especially often in geometry). The main reason is that you have a large catalogue of connections between concepts, and you can quickly intuit that if X were to be false, that would create tensions with other things you know to be true, so you are inclined to believe X is probably true to maintain the harmony of the conceptual space. It’s not so much that you can imagine the situation perfectly, but you can quickly imagine many other things that are logically connected to it.

But what do these people mean when they talk about ‘intuition’, and what exactly is its advantage? The author hints at an answer,

You go up in abstraction, “higher and higher”. The main object of study yesterday becomes just an example or a tiny part of what you are considering today. For example, in calculus classes you think about functions or curves. In functional analysis or algebraic geometry, you think of spaces whose points are functions or curves—that is, you “zoom out” so that every function is just a point in a space, surrounded by many other “nearby” functions. Using this kind of zooming out technique, you can say very complex things in short sentences—things that, if unpacked and said at the zoomed-in level, would take up pages. Abstracting and compressing in this way allows you to consider extremely complicated issues while using your limited memory and processing power.
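
To make the quoted ‘zooming out’ concrete, here is a minimal, standard example (my illustration, not the author’s): the sup metric turns the set C[0,1] of continuous real-valued functions on [0,1] into a metric space in which every function is a single point, and statements about “nearby” functions become statements about neighborhoods of that point.

```latex
% Every continuous function f : [0,1] -> R becomes one point of the
% space C[0,1] once we equip that space with the sup metric:
\[
  d(f, g) \;=\; \sup_{x \in [0,1]} \lvert f(x) - g(x) \rvert .
\]
% "All functions uniformly close to f" is now a one-line statement
% about a single neighborhood, rather than pages of pointwise bounds:
\[
  B_\varepsilon(f) \;=\; \bigl\{\, g \in C[0,1] : d(f, g) < \varepsilon \,\bigr\} .
\]
```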

At this point I was reminded of something Scott Aaronson wrote in his essay ‘Why Philosophers Should Care About Computational Complexity’,

...even if computers were better than humans at factoring large numbers or at solving randomly-generated Sudoku puzzles, humans might still be better at search problems with “higher-level structure” or “semantics,” such as proving Fermat’s Last Theorem or (ironically) designing faster computer algorithms. Indeed, even in limited domains such as puzzle-solving, while computers can examine solutions millions of times faster, humans (for now) are vastly better at noticing global patterns or symmetries in the puzzle that make a solution either trivial or impossible. As an amusing example, consider the Pigeonhole Principle, which says that n+1 pigeons can’t be placed into n holes, with at most one pigeon per hole. It’s not hard to construct a propositional Boolean formula Φ that encodes the Pigeonhole Principle for some fixed value of n (say, 1000). However, if you then feed Φ to current Boolean satisfiability algorithms, they’ll assiduously set to work trying out possibilities: “let’s see, if I put this pigeon here, and that one there … darn, it still doesn’t work!” And they’ll continue trying out possibilities for an exponential number of steps, oblivious to the “global” reason why the goal can never be achieved. Indeed, beginning in the 1980s, the field of proof complexity—a close cousin of computational complexity—has been able to show that large classes of algorithms require exponential time to prove the Pigeonhole Principle and similar propositional tautologies.
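
For concreteness, here is a small sketch (mine, not Aaronson’s) of how such a formula Φ can be written down: it emits the pigeonhole constraints as CNF clauses in the standard DIMACS format accepted by off-the-shelf SAT solvers. The variable naming x(i, j), meaning “pigeon i sits in hole j”, is just an illustrative choice.

```python
# Sketch: encode the Pigeonhole Principle PHP(n+1, n) as CNF in
# DIMACS format. The formula is unsatisfiable by construction.

def pigeonhole_cnf(n):
    """CNF for 'n+1 pigeons fit into n holes, one pigeon per hole'."""
    def var(i, j):
        # Variable x(i, j): pigeon i (0..n) sits in hole j (0..n-1).
        return i * n + j + 1  # DIMACS variables are 1-based

    clauses = []
    # Every pigeon sits in at least one hole.
    for i in range(n + 1):
        clauses.append([var(i, j) for j in range(n)])
    # No hole holds two pigeons.
    for j in range(n):
        for i1 in range(n + 1):
            for i2 in range(i1 + 1, n + 1):
                clauses.append([-var(i1, j), -var(i2, j)])
    return clauses

if __name__ == "__main__":
    n = 10  # Aaronson's n = 1000 already defeats such solvers
    clauses = pigeonhole_cnf(n)
    print(f"p cnf {(n + 1) * n} {len(clauses)}")
    for clause in clauses:
        print(" ".join(map(str, clause)), 0)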

Returning to the answer on ‘what it is like to have an understanding of very advanced mathematics’, the author writes,

...you are good at modularizing a conceptual space and taking certain calculations or arguments you don’t understand as “black boxes” and considering their implications anyway. You can sometimes make statements you know are true and have good intuition for, without understanding all the details. You can often detect where the delicate or interesting part of something is based on only a very high-level explanation.

Humans are good at ‘zooming out’ to detect global patterns. Humans can jump conceptual gaps by treating them as “black boxes”.

Intuition is a conceptual bird’s-eye view that allows humans to draw inferences from high-level abstractions without having to systematically trace out each step. Intuition is a wormhole. Intuition allows us to get from here to there given limited computational resources.

If this is true, it also explains many of our shortcomings and biases. Intuition’s greatest feature is also our biggest flaw.

The introduction of suitable abstractions is our only mental aid to organize and master complexity. — Edsger W. Dijkstra

Our computational limitations make it necessary to take shortcuts and to view the world through simplified models. Such heuristics are naturally prone to error and introduce biases. We draw connections without establishing them systematically. We recognize patterns in random noise.
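
As a toy illustration of the last point (my sketch, not from the text): meaningful-looking streaks arise in pure noise far more often than intuition expects, which is exactly the raw material for false pattern recognition.

```python
# Sketch: how often does a fair coin produce a streak of 8 or more
# identical flips in a sequence of 100? Intuition tends to read such
# streaks as a pattern; in fact they are common in pure noise.
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)
trials = 10_000
hits = sum(
    longest_streak([random.randint(0, 1) for _ in range(100)]) >= 8
    for _ in range(trials)
)
print(f"streak of >= 8 in 100 flips: {hits / trials:.1%} of sequences")
```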

Many of our biases can be seen as a side effect of making judgments under computational restrictions: a trade-off between optimization power and resource use.
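
A crude way to see this trade-off in code (again my sketch, using a hypothetical toy problem): an exhaustive search always finds the optimum but costs exponential time, while a greedy shortcut is fast and systematically biased toward certain mistakes.

```python
# Sketch: optimization power vs. resource use on a tiny subset-sum
# problem. Exhaustive search is never wrong but checks 2^n subsets;
# the greedy heuristic is cheap but biased, like intuition's shortcuts.
from itertools import combinations

def best_exhaustive(xs, target):
    """Optimal: check every subset (expensive, never wrong)."""
    best = 0
    for r in range(len(xs) + 1):
        for combo in combinations(xs, r):
            s = sum(combo)
            if best < s <= target:
                best = s
    return best

def best_greedy(xs, target):
    """Heuristic: grab the largest items first (cheap, sometimes wrong)."""
    total = 0
    for x in sorted(xs, reverse=True):
        if total + x <= target:
            total += x
    return total

xs, target = [7, 5, 4, 3], 9
print(best_exhaustive(xs, target))  # 9 (picks 5 + 4)
print(best_greedy(xs, target))      # 7 (takes 7 first; nothing else fits)
```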

Is it possible to correct for the shortcomings of intuition other than by refining rationality and becoming aware of our biases? That depends on how optimization power scales with resources, and on whether there are more efficient algorithms that work under limited resources.