Explanation vs Rationalization

Follow-up to: Toward a New Technical Explanation of Technical Explanation, The Bottom Line.

In The Bottom Line, Eliezer argues that arguments should only provide evidence to the extent that their conclusions were determined in a way which correlated them with reality. If you write down your conclusion at the bottom of the page, and then construct your argument, your argument does nothing to make the conclusion more entangled with reality.

This isn’t precisely true. If you know that someone tried really hard to put together all the evidence for their side, and you still find the argument underwhelming, you should probably update against what they’re arguing. Similarly, if a motivated arguer finds a surprisingly compelling argument with much less effort than you expected, this should update you toward what they claim. So you can still get evidence from the arguments of motivated reasoners, as long as you adjust for the base rate of argument quality you expected from them.
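To make the base-rate adjustment concrete, here is a minimal Bayesian sketch in Python; all the probabilities below are invented for illustration, not taken from the post:

```python
# Sketch: a motivated arguer always presents the strongest case they can find,
# so the strength of that case, compared to what you expected, still carries
# evidence. All numbers below are illustrative assumptions.

def posterior(prior, p_obs_if_true, p_obs_if_false):
    """Bayes' rule: P(H | observation) from P(obs | H) and P(obs | not-H)."""
    joint_true = prior * p_obs_if_true
    joint_false = (1 - prior) * p_obs_if_false
    return joint_true / (joint_true + joint_false)

prior = 0.5  # we start undecided about the arguer's conclusion

# Assumed base rates: a diligent motivated arguer usually finds a compelling
# case when their conclusion is true, and sometimes manages one even when false.
p_compelling_if_true = 0.8
p_compelling_if_false = 0.3

# A surprisingly compelling case updates us toward the claim...
print(posterior(prior, p_compelling_if_true, p_compelling_if_false))  # ~0.73

# ...and an underwhelming case from a hard-working arguer updates us against it.
print(posterior(prior, 1 - p_compelling_if_true, 1 - p_compelling_if_false))  # ~0.22
```

The update comes from comparing the argument you got to the argument you expected, not from taking the argument’s bottom line at face value.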

Still, motivated reasoning is bad for discourse, and aspiring rationalists seek to minimize it.

Yet, I think everyone has had the experience of trying to explain something and looking for arguments which will help the other person get it. This is different from trying to convince someone or win an argument, right? I have been uneasy about this for a long time. Trying to find a good explanation is a lot like motivated cognition. Yet trying to explain something to someone doesn’t seem like it is wrong in the same way, does it?

A possible view which occurred to me is that you should only give the line of reasoning which originally convinced you. That way, you’re sure you aren’t selecting evidence; the evidence is selecting what you argue.

I think this captures some of the right attitude, but is certainly too strict. Teachers couldn’t use this rule, since it is prudent to select good explanations rather than whichever explanation you heard first. I think the rule would also be bad for math research: looking for a proof is, mostly, a better use of your time than trying to articulate the mathematical intuitions which led to a conjecture.

A second attempt to resolve the conflict: you must adopt different conversational modes for efficiently conveying information vs collaboratively exploring the truth. It’s fine to make motivated arguments when you’re trying to explain things well, but you should avoid them like the plague if you’re trying to find out what’s true in the first place.

I also think this isn’t quite right, partly because I think good teaching is more like collaborative truth exploration, and partly because of the math research example I already mentioned.

I think this is what’s going on: you’re OK if you’re looking for a gears-level explanation. Since gears-level explanations are more objective, it is harder to bend them with motivated cognition. They’re also a handier form of knowledge to pass around from person to person, since they tend to be small and easily understood.

In the case of a mathematician who has a conjecture, a proof is a rigorous explanation which is quite unlikely to be wrong. You can think of looking for a proof as a way of checking the conjecture, sure; in that respect it might not seem like motivated cognition at all. However, that’s if you doubt your conjecture and are looking for the proof as a test. I think there’s also a case where you don’t doubt your conjecture, and are looking for a proof to convince others. You might still change your mind if you can’t find one, but the point is you weren’t wrong to search for a proof with the motive to convince—because of the rigorous nature of proofs, there is no selection-of-evidence problem.
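To see how little room a formal proof leaves for motivated selection, consider a trivial Lean 4 example (the theorem itself is just a stand-in): the proof checker accepts or rejects the argument with no regard for why you went looking for it.

```lean
-- The checker doesn't care about the prover's motive: this argument
-- type-checks whether you searched for it to test a doubt or to convince
-- a skeptic. (Nat.add_comm is a standard lemma in core Lean 4.)
theorem motive_free_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```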

If you are a physicist, and I ask what would happen if I do a certain thing with gyroscopes, you might give a quick answer without needing to think much. If I’m not convinced, you might proceed to try to convince me by explaining which physical principles are in play. You’re doing something which looks like motivated cognition, but it isn’t much of a problem, because it isn’t so easy to argue for wrong conclusions from physical principles (if both of us are engaging with the arguments at a gears level). If I ask you to tell me what reasoning actually produced your quick answer, rather than coming up with new arguments, you might have nothing better to say than "intuition from long experience playing with gyroscopes and thinking about the physics".

If you are an expert in interior design, and you tell me where I should put my couch, I might believe you, but still ask for an argument. Your initial statement may have been intuitive, but it isn’t wrong for you to try to come up with more explicit reasons. Maybe you’ll just come up with motivated arguments—and you should watch out for that—but maybe you’ll articulate a model, not too far from your implicit reasoning, in which the couch just obviously does belong in that spot.

There’s a lot of difference between math, physics, and interior design in terms of the amount of wiggle room gears-level arguments might have. There’s almost no room for motivated arguments in formal proofs. There’s lots of room in interior design. Physics is somewhere in between. I don’t know how to cleanly distinguish in practice, so that we can have a nice social norm against motivated cognition while allowing explanations. (People seem to mostly manage on their own; I don’t actually see so many people shutting down attempted explanations by labeling them motivated cognition.) Perhaps being aware of the distinction is enough.

The distinction is also helpful for explaining why you might want more information when you already believe someone. It’s easy for me to speak from my gears level model and sound like I don’t believe you yet, when really I’m just asking for an explanation. “Agents should maximize expected utility!” you say. “Convince me!” I say. “VNM Theorem!” you say. “What’s the proof?” I say. You can’t necessarily tell if I’m being skeptical or curious. We can convey more nuanced epistemics by saying things like “I trust you on things like this, but I don’t have your models” or “OK, can you explain why?”

Probabilistic evidence provides nudges in one direction or another (sometimes strong, sometimes weak). These can be filtered by a clever arguer, who collects the nudges in one direction and discards the rest, to justify what they want you to believe. However, if this kind of probabilistic reasoning is like floating in a raft on the sea, a gears-level explanation is like finding firm land to stand on. Mathematics is bedrock; physics is firm soil; other subjects may be like shifting sand (it’s all fake frameworks to a greater or lesser extent), but even sand is steadier than water!
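As a toy illustration of how filtering nudges goes wrong, here is a short Python simulation; the model and all its numbers are invented for the example:

```python
import random

random.seed(0)  # deterministic toy run

# Toy model: the world emits noisy "nudges" (log-likelihood ratios) about a
# hypothesis that is, in fact, false, so the nudges are negative on average.
# An honest reporter passes all of them along; a clever arguer forwards only
# the nudges that favor the hypothesis.
evidence = [random.gauss(-0.5, 1.0) for _ in range(100)]

honest_total = sum(evidence)                        # large and negative
filtered_total = sum(e for e in evidence if e > 0)  # large and positive

print(f"honest sum of nudges:   {honest_total:+.1f}")
print(f"filtered sum of nudges: {filtered_total:+.1f}")
```

A listener who treats the filtered stream as if it were honest reporting will be pushed confidently toward a false conclusion.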