Rationalization

In “The Bottom Line”, I presented the dilemma of two boxes, only one of which contains a diamond, with various signs and portents as evidence. I dichotomized the curious inquirer and the clever arguer. The curious inquirer writes down all the signs and portents, and processes them, and finally writes down “Therefore, I estimate an 85% probability that box B contains the diamond.” The clever arguer works for the highest bidder, and begins by writing, “Therefore, box B contains the diamond”, and then selects favorable signs and portents to list on the lines above.

The first procedure is rationality. The second procedure is generally known as “rationalization”.

“Rationalization.” What a curious term. I would call it a wrong word. You cannot “rationalize” what is not already rational. It is as if “lying” were called “truthization”.

On a purely computational level, there is a rather large difference between:

  1. Starting from evidence, and then crunching probability flows, in order to output a probable conclusion. (Writing down all the signs and portents, and then flowing forward to a probability on the bottom line which depends on those signs and portents.)

  2. Starting from a conclusion, and then crunching probability flows, in order to output evidence apparently favoring that conclusion. (Writing down the bottom line, and then flowing backward to select signs and portents for presentation on the lines above; both flows are sketched in code below.)
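
To make the algorithmic difference concrete, here is a minimal Python sketch of the two flows. The signs, likelihood ratios, and function names are all invented for illustration; the point is only the direction of dependence, not the particular numbers.

```python
# Toy illustration of forward flow vs. backward flow.
# All signs and likelihood ratios below are invented for this sketch.

# Likelihood ratios: P(sign | diamond in box B) / P(sign | diamond in box A).
SIGNS = {"blue chalk mark": 3.0, "faint glow": 2.0, "scuffed corner": 0.5}


def forward_flow(signs, prior=0.5):
    """Curious inquirer: start from ALL the evidence, update,
    and let the bottom line land wherever the evidence pushes it."""
    odds = prior / (1.0 - prior)
    for likelihood_ratio in signs.values():
        odds *= likelihood_ratio  # each sign moves the estimate, up or down
    return odds / (1.0 + odds)    # posterior P(diamond is in box B)


def backward_flow(signs, bottom_line="Therefore, box B contains the diamond"):
    """Clever arguer: the bottom line is fixed in advance; the only thing
    computed is which signs get written on the lines above it."""
    favorable = [sign for sign, lr in signs.items() if lr > 1.0]
    return favorable, bottom_line


print(forward_flow(SIGNS))   # the conclusion depends on the evidence
print(backward_flow(SIGNS))  # the presented evidence depends on the conclusion
```

Note that `forward_flow` does not know its answer until the loop finishes, while `backward_flow` receives its answer as an argument before any processing begins.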

What fool devised such confusingly similar words, “rationality” and “rationalization”, to describe such extraordinarily different mental processes? I would prefer terms that made the algorithmic difference obvious, like “rationality” versus “giant sucking cognitive black hole”.

Not every change is an improvement, but every improvement is necessarily a change. You cannot obtain more truth for a fixed proposition by arguing it; you can make more people believe it, but you cannot make it more true. To improve our beliefs, we must necessarily change our beliefs. Rationality is the operation that we use to obtain more truth-value for our beliefs by changing them. Rationalization operates to fix beliefs in place; it would be better named “anti-rationality”, both for its pragmatic results and for its reversed algorithm.

“Rationality” is the forward flow that gathers evidence, weighs it, and outputs a conclusion. The curious inquirer used a forward-flow algorithm: first gathering the evidence, writing down a list of all visible signs and portents, which they then processed forward to obtain a previously unknown probability for the box containing the diamond. During the entire time that the rationality-process was running forward, the curious inquirer did not yet know their destination, which was why they were curious. In the Way of Bayes, the prior probability equals the expected posterior probability: If you know your destination, you are already there.
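
The claim that the prior equals the expected posterior is just the law of total probability in Bayesian dress; a minimal statement of the identity, in standard notation (not drawn from the original essay):

```latex
\mathbb{E}\!\left[\,P(H \mid E)\,\right]
  \;=\; \sum_{e} P(E = e)\, P(H \mid E = e)
  \;=\; \sum_{e} P(H,\, E = e)
  \;=\; P(H).
```

If you could already predict which way the posterior would move, you could have moved there in advance; any expected shift away from the prior is an inconsistency in your beliefs.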

“Rationalization” is a backward flow from conclusion to selected evidence. First you write down the bottom line, which is known and fixed; the purpose of your processing is to find out which arguments you should write down on the lines above. This, not the bottom line, is the variable unknown to the running process.

I fear that Traditional Rationality does not properly sensitize its users to the difference between forward flow and backward flow. In Traditional Rationality, there is nothing wrong with the scientist who arrives at a pet hypothesis and then sets out to find an experiment that proves it. A Traditional Rationalist would look at this approvingly, and say, “This pride is the engine that drives Science forward.” Well, it is the engine that drives Science forward. It is easier to find a prosecutor and defender biased in opposite directions, than to find a single unbiased human.

But just because everyone does something doesn’t make it okay. It would be better yet if the scientist, arriving at a pet hypothesis, set out to test that hypothesis for the sake of curiosity—creating experiments that would drive their own beliefs in an unknown direction.

If you genuinely don’t know where you are going, you will probably feel quite curious about it. Curiosity is the first virtue, without which your questioning will be purposeless and your skills without direction.

Feel the flow of the Force, and make sure it isn’t flowing backwards.