Argument, intuition, and recursion

Mathematicians answer clean questions that can be settled with formal argument. Scientists answer empirical questions that can be settled with experimentation.

Collective epistemology is hard in domains where it’s hard to settle disputes with either formal argument or experimentation (or a combination), like policy or futurism.

I think that’s where rationalists could add value, but first we have to grapple with a basic question: if you can’t settle the question with logic, and you can’t check your intuitions against reality to see how accurate they are, then what are you even doing?

In this post I’ll explain how I think about that question. For those who are paying close attention, it’s similar to several of my previous posts.

I. An example

An economist might answer a simple question (“what is the expected employment effect of a steel tariff?”) by setting up an econ 101 model and calculating equilibria.

After setting up enough simple models, they can develop intuitions and heuristics that roughly predict the outcome without actually doing the calculation.

These intuitions won’t be as accurate as intuitions trained against the real world—if our economist could observe the impact of thousands of real economic interventions, they should do that instead (and in the case of economics, you often can). But the intuition isn’t vacuous either: it’s a fast approximation of econ 101 models.

Once our economist has built up econ 101 intuitions, they can consider more nuanced arguments that leverage those fast intuitive judgments. For example, they could consider possible modifications to their simple model of steel tariffs (like labor market frictions), use their intuition to quickly evaluate each modification, and see which modifications actually affect the simple model’s conclusion.

After going through enough nuanced arguments, they can develop intuitions and heuristics that predict these outcomes. For example, they can learn to predict which assumptions are most important to a simple model’s conclusions.

Equipped with these stronger intuitions, our economist can use them to get better answers: they can construct more robust models, explore the most important assumptions, design more effective experiments, and so on.

(Eventually our economist will improve their intuitions further by predicting these better answers; they can use the new intuitions to answer more complex questions....)

Any question that can be answered by this procedure could eventually be answered using econ 101 directly. But with every iteration of intuition-building, the complexity of the underlying econ 101 explanation increases geometrically. This process won’t reveal any truths beyond those implicit in the econ 101 assumptions, but it can do a good job of efficiently exploring the logical consequences of those assumptions.

(In practice, an economist’s intuitions should incorporate both theoretical argument and relevant data, but that doesn’t change the basic picture.)
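To make the shape of this recursion concrete, here is a toy sketch in code. Every name in it (`decompose`, `combine`, `Intuition`, and so on) is invented for illustration; the only point is the loop: explicit reasoning leans on the current intuition for its subquestions, and the intuition is then trained on the conclusions of that explicit reasoning.

```python
# Purely illustrative sketch; nothing here is a real model, just the shape of
# the bootstrapping loop described above.

def decompose(question):
    # Stand-in for breaking a question into subquestions.
    return [f"{question} / subquestion {i}" for i in (1, 2)]

def combine(question, subanswers):
    # Stand-in for aggregating subanswers into an answer.
    return f"answer({question}) built from {len(subanswers)} subanswers"

class Intuition:
    """A fast, approximate judgment trained to imitate slow explicit reasoning."""
    def __init__(self):
        self.cache = {}                      # question -> remembered answer

    def guess(self, question):
        return self.cache.get(question, "no strong view yet")

    def update(self, question, answer):
        self.cache[question] = answer        # in reality: a generalizing update

def explicit_reasoning(question, intuition, depth):
    """Slow reasoning: expand a question into subquestions, answering each with
    the current intuition (or by recursing while budget remains)."""
    if depth == 0:
        return intuition.guess(question)
    subanswers = [explicit_reasoning(sq, intuition, depth - 1)
                  for sq in decompose(question)]
    return combine(question, subanswers)

def bootstrap(questions, rounds=3, depth=2):
    """Each round: answer questions with slow reasoning built on the current
    intuition, then train the intuition on those (better) answers."""
    intuition = Intuition()
    for _ in range(rounds):
        for q in questions:
            intuition.update(q, explicit_reasoning(q, intuition, depth))
    return intuition

print(bootstrap(["employment effect of a steel tariff?"]).cache)
```

In this toy version the “intuition” is just a lookup table, which is exactly the degenerate cached-answer case discussed below; a real intuition generalizes across questions rather than memorizing them.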

II. The process

The same recursive process is responsible for most of my intuitions about futurism. I don’t get to test my intuition by actually peeking at the world in 20 years. But I can consider explicit arguments and use them to refine my intuitions—even if evaluating arguments requires using my current intuitions.

For example, when I think about takeoff speeds I’m faced with subquestions like “how much should we infer from the difference between chimps and humans?” It’s not tractable to answer all of these subquestions in detail, so for a first pass I use my intuition to answer each one.

Eventually it’s worthwhile to explore some of those subquestions in more depth, e.g. I might choose to dig into the analogy between chimps and humans. In the process I run into sub-sub-questions, like “to what extent is evolution optimizing for the characteristics that changed discontinuously between chimps and humans?” I initially answer those with intuition but might sometimes expand them in the same way, turning up sub-sub-sub-questions...

When I examine the arguments for a question Q, I use my current intuition to answer the subquestions that I encounter. Once I get an answer for Q, I do two things:

  • I update my cached belief about Q, to reflect the new things I’ve learned.

  • If my new belief differs from my original intuition, I update my intuition. My intuitions generalize across cases, so this will affect my view on lots of other questions.

A naive description of reasoning only talks about the first kind of update. But I think that the second kind is where 99% of the important stuff happens.

(There isn’t any bright line between these two cases. A “cached answer” is just a very specific kind of intuition, and in practice the extreme case of seeing the exact question multiple times is mostly irrelevant. For example, it’s not helpful to have a cached answer to “how fast will AI takeoff be?”; instead I have a cluster of intuitions that generate answers to a hundred different variants of that question.)
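As a toy contrast between the two kinds of update (again, everything here is invented for illustration): caching only helps if the exact question recurs, whereas updating a generalizing intuition shifts the answers to many related questions at once.

```python
# Toy contrast between the two kinds of update; the "intuition" is a crude
# bag-of-words predictor, chosen only because it visibly generalizes.

class Beliefs:
    def __init__(self):
        self.cache = {}        # exact question -> cached answer
        self.weights = {}      # feature -> learned weight (a crude "intuition")

    def features(self, question):
        return set(question.lower().split())

    def intuitive_answer(self, question):
        # A generalizing judgment: score a question by the features it shares
        # with past conclusions.
        return sum(self.weights.get(f, 0.0) for f in self.features(question))

    def answer(self, question):
        return self.cache.get(question, self.intuitive_answer(question))

    def update(self, question, new_answer):
        # First kind of update: cache the conclusion for this exact question.
        self.cache[question] = new_answer
        # Second kind of update: nudge the generalizing intuition toward the
        # new conclusion, which also moves answers to *other* questions that
        # share features with this one.
        error = new_answer - self.intuitive_answer(question)
        for f in self.features(question):
            self.weights[f] = self.weights.get(f, 0.0) + 0.5 * error

b = Beliefs()
b.update("how discontinuous was the chimp-human transition?", 0.8)
# A different question that shares features now gets a nonzero answer too:
print(b.answer("how discontinuous will AI progress be?"))   # 0.8, not 0.0
```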

The second kind of update can come in lots of flavors. Some examples:

  • When I make an intuitive judgment I have to weigh lots of different factors: my own snap judgment, others’ views, various heuristic arguments, various analogies, etc. I set these weights partly based on empirical predictions but largely based on predicting the result of arguments. For example, in many contexts I’d lean heavily on Carl or Holden’s views, based on them systematically predicting the views that I’d hold after exploring arguments in more detail.

  • I have many explicit heuristics or high-level principles of reasoning that have been refined to predict the results of more detailed arguments. For example, I often use a cluster of “anti-fanaticism” heuristics against assigning unbounded ratios between the importance of different considerations. This is not actually a simple general principle to state, and it’s not supported by a general argument; instead I have an intuitive sense of when the heuristic applies.

  • My unconscious judgments are significantly optimized to predict the result of longer arguments. This is most obvious in cases like mathematics—for example, I have a well-developed intuition about duality and the Fourier transform that lets me answer hard questions, and it was refined almost entirely by practice. Intuitions are harder to see (and less reliable) in cases like the economics of foom or the robustness of RL to function approximators, but something basically similar is going on.

Note that none of these have independent evidential value; they would be screened off by exploring the arguments in enough detail. But in practice that’s pretty hard to do, and in many cases it might be computationally infeasible.

Like the economist in the example, I would do better by updating my intuitions against the real world. But in many domains there just isn’t that much data—we only get to see one year of the future per year, and policy experiments can be very expensive—and this approach allows us to stretch the data we have by incorporating an increasing range of logical consequences.

III. Disclaimer

The last section is partly a positive description of how I actually reason and partly a normative description of how I believe people should reason. In the next section I’ll try to turn it into a collective epistemology.

I’ve found this framework useful for clarifying my own thinking about thinking. Unfortunately, I can’t give you much empirical evidence that it works well.

Even if this approach were the best thing since sliced bread, I think that empirically demonstrating that it helps would still be a massive scientific project. So I hope I can be forgiven for a lack of empirical rigor. But you should still take everything with a grain of salt.

And I want to stress: I don’t mean to devalue diving deeply into arguments and fleshing them out as much as possible. I think it’s usually impossible to get all the way to a mathematical argument, but you can take a pretty giant step from your initial intuitions. Though I talk about “one-step backups” in the above examples for simplicity, I think that updating on really big steps is often a better idea. Moreover, if we want to have the best view we can on a particular question, it’s clearly worth unpacking the arguments as much as we can. (In fact the argument in this post should make you unpack arguments more, since in addition to the object-level benefit you also benefit from building stronger transferable intuitions.)

IV. Disagreement

Suppose Alice and Bob disagree about a complicated question—say AI timelines—and they’d like to learn from each other.

A common (implicit) hope is to exhaustively explore the tree of arguments and counterarguments, following a trail of higher-level disagreements to each low-level disagreement. If Alice and Bob mostly have similar intuitions, but they’ve considered different arguments or have different empirical evidence, then this process can highlight the difference and they can sometimes reach agreement.

Often this doesn’t work because Alice and Bob have wildly different intuitions about a whole bunch of different questions. I think that in a complicated argument, the number of subquestions about which Alice and Bob disagree can be astronomically large, and there is zero hope of resolving any significant fraction of them. What to do then?

Here’s one possible strategy. Let’s suppose for simplicity that Alice and Bob disagree, and that an outside observer Judy is interested in learning about the truth of the matter (the identical procedure works if Judy is actually one of Alice and Bob). Then:

Alice explains her view on the top-level question, in terms of her answers to simpler subquestions. Bob likely disagrees with some of these steps. If there is disagreement, Alice and Bob talk until they “agree to disagree”—they make sure that they are using the subquestion to mean the same thing, and that they’ve updated on each other’s beliefs (and whatever cursory arguments each of them is willing to make about the claim). Then Alice and Bob find their most significant disagreement and recursively apply the same process to that disagreement.

They repeat this process until they reach a state where they don’t have any significant disagreements about subclaims (potentially because there are none, and the claim is so simple that Judy feels confident she can assess its truth directly).

Hopefully at this point Alice and Bob can reach agreement, or else identify some implicit subquestion about which they disagree. But if not, that’s OK too. Ultimately Judy is the arbiter of truth. Every time Alice and Bob have been disagreeing, they have been making a claim about what Judy will ultimately believe.

The reason we were exploring this claim was that Alice and Bob disagreed significantly before we unpacked the details. Now at least one of Alice and Bob learns that they were wrong, and both of them can update their intuitions (including their intuitions for how much to respect each other’s opinions in different kinds of cases).

Alice and Bob then start the process over with their new intuitions. The new process might involve pursuing a nearly-identical set of disagreements (which they can do extremely quickly), but at some point it will take a different turn.

If you run this process enough times, eventually (at least one of) Alice or Bob will change their opinion about the root question—or more precisely, about what Judy will eventually come to believe about the root question—because they’ve absorbed something about the other’s intuitions.
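Here is a minimal sketch of one pass of this procedure in code. Everything in it is invented for illustration: claims are nested dictionaries, “significant disagreement” is a crude gap between toy probabilities, and Judy’s verdict is simply her own credence. In the version described above, the whole loop then restarts from the root with the updated intuitions.

```python
# Illustrative sketch of one pass of the Alice/Bob/Judy procedure.

class Agent:
    def __init__(self, credences):
        self.credences = credences           # question -> toy probability

    def credence(self, claim):
        return self.credences.get(claim["q"], 0.5)

    def update(self, claim, verdict):
        # Stand-in for updating both the cached belief and the intuitions that
        # produced it (the disagreement was, in effect, a prediction of this).
        self.credences[claim["q"]] = verdict

def gap(claim, alice, bob):
    return abs(alice.credence(claim) - bob.credence(claim))

def resolve(claim, alice, bob, judy, threshold=0.1):
    """Drill into the most contested subclaim until reaching one with no
    significant sub-disagreements, then let Judy assess it and both sides update."""
    contested = [c for c in claim.get("subclaims", [])
                 if gap(c, alice, bob) > threshold]
    if contested:
        worst = max(contested, key=lambda c: gap(c, alice, bob))
        return resolve(worst, alice, bob, judy, threshold)
    verdict = judy.credence(claim)           # Judy as the arbiter of truth
    alice.update(claim, verdict)
    bob.update(claim, verdict)
    return claim["q"], verdict

root = {"q": "fast takeoff?", "subclaims": [
    {"q": "the chimp-human analogy is informative?", "subclaims": []},
    {"q": "continuity of past technological progress?", "subclaims": []}]}
alice = Agent({"fast takeoff?": 0.8, "the chimp-human analogy is informative?": 0.7})
bob   = Agent({"fast takeoff?": 0.2, "the chimp-human analogy is informative?": 0.2})
judy  = Agent({})
print(resolve(root, alice, bob, judy))
```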

There are two qualitatively different ways that agreement can occur:

  • Convergence. Eventually, Alice will have absorbed Bob’s intuitions and vice versa. This might take a while—potentially, as long as it took Alice or Bob to originally develop their intuitions. (But it can still be exponentially smaller than the size of the tree.)

  • Mutual respect. If Alice and Bob keep disagreeing significantly, then the simple algorithm “take the average of Alice’s and Bob’s views” will outperform at least one of them (and often both of them); see the toy numeric check after this list. So two Bayesians can’t disagree significantly too many times, even if they totally distrust one another.
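A toy numeric check of the averaging claim, with invented forecasts and the Brier score standing in for “outperform”: because squared error is convex, the averaged forecast can never score worse than both individuals, and when their errors point in different directions it beats both.

```python
# Toy check of the averaging claim under the Brier score (invented numbers).

def brier(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(outcomes)

alice = [0.9, 0.2, 0.8, 0.3]
bob   = [0.4, 0.7, 0.3, 0.8]
truth = [1,   1,   0,   0]
avg   = [(a + b) / 2 for a, b in zip(alice, bob)]

print(round(brier(alice, truth), 4),
      round(brier(bob, truth), 4),
      round(brier(avg, truth), 4))
# -> 0.345 0.295 0.2575: here the average does better than both Alice and Bob.
```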

If Alice and Bob are poor Bayesians (or motivated reasoners) and continue to disagree, then Judy can easily take the matter into her own hands by deciding how to weigh Alice and Bob’s opinions. For example, Judy might decide that Alice is right most of the time and Bob is being silly by not deferring more—or Judy might decide that both of them are silly and that the midpoint between their views is even better.

The key thing that makes this work—and the reason it requires no common knowledge of rationality or other strong assumptions—is that Alice and Bob can cash out their disagreements as a prediction about what Judy will ultimately believe.

Although it introduces significant additional complications, I think this entire scheme would sometimes work better with betting, as in this proposal. Rather than trusting Alice and Bob to be reasonable Bayesians and eventually stop disagreeing significantly, Judy can instead perform an explicit arbitrage between their views. This only works if Alice and Bob both care about Judy’s view and are willing to pay to influence it.

V. Assorted details

After convergence Alice and Bob agree only approximately about each claim (such that they won’t update much from resolving the disagreement). Hopefully that lets them agree approximately about the top-level claim. If subtle disagreements about lemmas can blow up into giant disagreements about downstream claims, then this process won’t generally converge. If Alice and Bob are careful probabilistic reasoners, then a “slight” disagreement involves each of them acknowledging the plausibility of the other’s view, which seems to rule out most kinds of cascading disagreement.

This is not necessarily an effective tool for Alice to bludgeon Judy into adopting her view; it’s only helpful if Judy is actually trying to learn something. If you are trying to bludgeon people with arguments, you are probably doing it wrong. (Though gosh there are a lot of examples of this amongst the rationalists.)

By the construction of the procedure, Alice and Bob are having disagreements about what Judy will believe after examining arguments. This procedure is (at best) going to extract the logical consequences of Judy’s beliefs and standards of evidence.

Alice and Bob don’t have to operationalize claims enough that they can bet on them. But they do want to reach agreement about the meaning of each subquestion, and in particular to understand what meaning Judy assigns to each subquestion. “Meaning” captures both what you infer from an answer to that subquestion and how you answer it. If Alice and Bob don’t know how Judy uses language, then they can learn that over the course of this process, but hopefully we have more cost-effective ways to agree on the use of language (or communicate ontologies) than going through an elaborate argument procedure.

One way that Alice and Bob can get stuck is by not trusting each other’s empirical evidence. For example, Bob might explain his beliefs by saying that he’s seen evidence X, and Alice might not trust him or might believe that he is reporting evidence selectively. This procedure isn’t going to resolve that kind of disagreement. Ultimately it just punts the question to what Judy is willing to believe based on all of the available arguments.

Alice and Bob’s argument can have loops, if e.g. Alice believes X because of Y, which she believes because of X. We can unwind these loops by tagging answers explicitly with the “depth” of reasoning supporting that answer, decrementing the depth at each step, and defaulting to Judy’s intuition when the depth reaches 0. This mirrors the iterative process of intuition-formation, which evolves over time starting from t=0, when we use our initial intuitions. I think that in practice this is usually not needed in arguments, because everyone knows why Alice is trying to argue for X—if Alice is trying to prove X as a step towards proving Y, then invoking Y as a lemma for proving X looks weak.
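A minimal sketch of the depth-tagging fix (all names invented): each answer carries a depth budget that decrements at every justificatory step, and at depth 0 we fall back to intuition, so the X-because-of-Y-because-of-X loop unwinds after finitely many steps.

```python
# Illustrative sketch: answers carry a "depth" budget so circular
# justifications bottom out in intuition instead of looping forever.

def combine(question, subanswers):
    # Stand-in for aggregating the support; here, just average toy credences.
    return sum(subanswers) / len(subanswers)

def answer(question, justifications, intuition, depth):
    """justifications: question -> list of supporting subquestions (may contain
    cycles, e.g. X supported by Y and Y supported by X)."""
    support = justifications.get(question, [])
    if depth == 0 or not support:
        return intuition(question)           # fall back to unaided intuition
    # Each appeal to a supporting claim uses a strictly smaller depth budget.
    return combine(question,
                   [answer(sq, justifications, intuition, depth - 1)
                    for sq in support])

justifications = {"X": ["Y"], "Y": ["X"]}    # a circular argument
intuition = lambda q: {"X": 0.6, "Y": 0.4}.get(q, 0.5)
print(answer("X", justifications, intuition, depth=3))   # bottoms out at 0.4
```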

My futurism examples differ from my economist example in that I’m starting from big questions and breaking them down to figure out which low-level questions are important, rather than starting from a set of techniques and composing them to see what bigger-picture questions I can answer. In practice I think that both techniques are appropriate and a combination usually makes the most sense. In the context of argument in particular, I think that breaking down is a particularly valuable strategy. But even in arguments it’s still often faster to go on an intuition-building digression where we consider subquestions that haven’t appeared explicitly in the argument.