Explanation vs Rationalization

Follow-up to: Toward a New Technical Explanation of Technical Explanation, The Bottom Line.

In The Bottom Line, Eliezer argues that arguments should only provide evidence to the extent that their conclusions were determined in a way which correlated them with reality. If you write down your conclusion at the bottom of the page, and then construct your argument, your argument does nothing to make the conclusion more entangled with reality.

This isn’t precisely true. If you know that someone tried really hard to put together all the evidence for their side, and you still find the argument underwhelming, you should probably update against what they’re arguing. Similarly, if a motivated arguer finds a surprisingly compelling argument with much less effort than you expected, this should update you toward what they claim. So, you can still get evidence from the arguments of motivated reasoners, if you adjust for the base rates of argument quality you expected from them.
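As a toy illustration of this (all the numbers below are made up for the sake of the sketch), the update depends not on the argument’s absolute strength but on how it compares to what you expected the motivated arguer to produce under each hypothesis:

```python
# Toy Bayesian sketch (hypothetical numbers): updating on a motivated
# arguer's case. What matters is not the argument's absolute strength,
# but how likely such an argument was under each hypothesis.

def posterior(prior, p_arg_if_true, p_arg_if_false):
    """Bayes' rule for a binary hypothesis, given one observed argument."""
    num = prior * p_arg_if_true
    return num / (num + (1 - prior) * p_arg_if_false)

# Suppose a motivated arguer would produce an argument this underwhelming
# only 20% of the time if their claim were true, but 60% of the time if it
# were false. Hearing it is then evidence *against* the claim:
print(posterior(0.5, 0.2, 0.6))   # 0.25: we update downward

# Conversely, a surprisingly compelling argument found with little effort
# (say, 30% likely if true, 5% likely if false) updates us toward the claim:
print(posterior(0.5, 0.3, 0.05))  # ~0.86: we update upward
```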

Still, motivated reasoning is bad for discourse, and aspiring rationalists seek to minimize it.

Yet, I think everyone has had the experience of trying to explain something and looking for arguments which will help the other person to get it. This is different from trying to convince someone / win an argument, right? I have been uneasy about this for a long time. Trying to find a good explanation is a lot like motivated cognition. Yet, trying to explain something to someone doesn’t seem wrong in the same way, does it?

A possible view which occurred to me is that you should only give the line of reasoning which originally convinced you. That way, you’re sure you aren’t selecting evidence; the evidence is selecting what you argue.

I think this captures some of the right attitude, but is certainly too strict. Teachers couldn’t use this rule, since it is prudent to select good explanations rather than whichever explanation you heard first. I think the rule would also be bad for math research: looking for a proof is, mostly, a better use of your time than trying to articulate the mathematical intuitions which led to a conjecture.

A second attempt to resolve the conflict: you must adopt different conversational modes for efficiently conveying information vs collaboratively exploring the truth. It’s fine to make motivated arguments when you’re trying to explain things well, but you should avoid them like the plague if you’re trying to find out what’s true in the first place.

I also think this isn’t quite right, partly because I think good teaching is more like collaborative truth exploration, and partly because of the math research example I already mentioned.

I think this is what’s going on: you’re OK if you’re looking for a gears-level explanation. Since gears-level explanations are more objective, it is harder to bend them with motivated cognition. They’re also a handier form of knowledge to pass around from person to person, since they tend to be small and easily understood.

In the case of a mathematician who has a conjecture, a proof is a rigorous explanation which is quite unlikely to be wrong. You can think of looking for a proof as a way of checking the conjecture, sure; in that respect it might not seem like motivated cognition at all. However, that’s if you doubt your conjecture and are looking for the proof as a test. I think there’s also a case where you don’t doubt your conjecture, and are looking for a proof to convince others. You might still change your mind if you can’t find one, but the point is you weren’t wrong to search for a proof with the motive to convince: because of the rigorous nature of proofs, there is no selection-of-evidence problem.

If you are a physicist, and I ask what would happen if I do a certain thing with gyroscopes, you might give a quick answer without needing to think much. If I’m not convinced, you might proceed to try and convince me by explaining which physical principles are in play. You’re doing something which looks like motivated cognition, but it isn’t much of a problem, because it isn’t so easy to argue for wrong conclusions from physical principles (if both of us are engaging with the arguments at a gears level). If I ask you to tell me what reasoning actually produced your quick answer, rather than coming up with arguments, you might have nothing better to say than “intuition from long experience playing with gyroscopes and thinking about the physics”.

If you are an expert in interior design, and you tell me where I should put my couch, I might believe you, but still ask for an argument. Your initial statement may have been intuitive, but it isn’t wrong for you to try and come up with more explicit reasons. Maybe you’ll just come up with motivated arguments, and you should watch out for that, but maybe you’ll articulate a model, not too far from your implicit reasoning, in which the couch just obviously does belong in that spot.

There’s a lot of difference between math, physics, and interior design in terms of the amount of wiggle room gears-level arguments might have. There’s almost no room for motivated arguments in formal proofs. There’s lots of room in interior design. Physics is somewhere in between. I don’t know how to cleanly distinguish these cases in practice, so that we can have a nice social norm against motivated cognition while allowing explanations. (People seem to mostly manage on their own; I don’t actually see so many people shutting down attempted explanations by labeling them motivated cognition.) Perhaps being aware of the distinction is enough.

The distinction is also helpful for explaining why you might want more information when you already believe someone. It’s easy for me to speak from my gears-level model and sound like I don’t believe you yet, when really I’m just asking for an explanation. “Agents should maximize expected utility!” you say. “Convince me!” I say. “VNM Theorem!” you say. “What’s the proof?” I say. You can’t necessarily tell if I’m being skeptical or curious. We can convey more nuanced epistemics by saying things like “I trust you on things like this, but I don’t have your models” or “OK, can you explain why?”

Probabilistic evidence provides nudges in one direction or another (sometimes strong, sometimes weak). These can be filtered by a clever arguer, collecting nudges in one direction and discarding the rest, to justify what they want you to believe. However, if this kind of probabilistic reasoning is like floating in a raft on the sea, a gears-level explanation is like finding firm land to stand on. Mathematics is bedrock; physics is firm soil; other subjects may be like shifting sand (it’s all fake frameworks to greater/lesser extent), but it’s more steady than water!
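To make the filtering worry concrete, here is a small sketch with made-up numbers, treating each nudge as an independent piece of evidence worth a 2:1 likelihood ratio:

```python
# Sketch (hypothetical numbers): each "nudge" is a weak, independent piece
# of probabilistic evidence worth a 2:1 likelihood ratio, for the claim
# (True) or against it (False).

def naive_posterior(nudges, prior=0.5, lr=2.0):
    """Update on each nudge as if the stream were unfiltered."""
    odds = prior / (1 - prior)
    for favorable in nudges:
        odds *= lr if favorable else 1 / lr
    return odds / (1 + odds)

evidence = [True, False, True, False, False, True, False, False,
            True, False, False, True, False, False, True, False]

# Honest updating on all 16 nudges (6 for, 10 against) moves away
# from the claim:
honest = naive_posterior(evidence)                    # ~0.06

# A clever arguer reports only the favorable nudges; a listener who
# updates naively on the filtered stream ends up nearly certain of a
# claim the full evidence disfavors:
slanted = naive_posterior([n for n in evidence if n])  # ~0.98
```

The fix, as above, is to update on the *process*: once you know the arguer discards unfavorable nudges, the reported stream carries far less information than naive updating assumes.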