Seeking Truth Too Hard Can Keep You from Winning

LW-style rationality is about winning. Instrumentally, most rationalists think they can win, in part, by seeking truth. I frequently run into comments here where folks treat truth as an effectively “sacred” concern, in the sense that truth matters to them above all else. In this post, I hope to convince you that, because of the problem of the criterion, seeking truth too hard and holding it sacred actually work against learning the truth, because doing so leaves you insufficiently skilled at reasoning about non-truth-seeking agents.

Let’s settle a couple things first.

What is truth? Rather than get into philosophical debates about this one, let’s use a reasonable working definition: by truth we mean “accurate predictions about our experiences”. This sort of truth is nice because it makes minimal metaphysical claims (e.g. no need to posit the external reality required by a correspondence theory of truth) and it’s compatible with Bayesianism (you strive to believe things that match your observations). Also, it seems to be the thing we actually care about when we seek truth: we want to know things that tell us what we’ll find as we experience the world.
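To make that working definition a bit more concrete, here’s a minimal sketch (the rain example, the specific probabilities, and the log-scoring rule are all mine, purely for illustration) of how “accurate predictions about our experiences” can be cashed out: score candidate beliefs by how well they predict what you actually observe, and prefer the belief that scores better.

```python
import math

def log_score(predicted_prob_of_rain: float, it_rained: bool) -> float:
    """Log score of one prediction: closer to 0 means a better fit to what happened."""
    p = predicted_prob_of_rain if it_rained else 1.0 - predicted_prob_of_rain
    return math.log(p)

# Four days of (made-up) experience: did it rain?
observations = [True, True, False, True]

belief_a = 0.8  # "it usually rains here"
belief_b = 0.2  # "it rarely rains here"

score_a = sum(log_score(belief_a, obs) for obs in observations)
score_b = sum(log_score(belief_b, obs) for obs in observations)

# Under the working definition, belief_a is the "more true" belief here:
# it predicts these experiences more accurately.
print(score_a, score_b)  # score_a > score_b
```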

What’s the problem of the criterion? Really, you still don’t know? First time here? In short, the problem of the criterion is that to know something you must know how you know it, but knowing how you know something is itself knowing something, which creates an infinite loop (we call this loop epistemic circularity). The only way to break the loop is to ground knowledge (information you believe predicts your experiences, i.e. stuff you think is true) in something other than more knowledge. That something is purpose.

So, what’s the problem with prioritizing truth? At first glance, nothing. Predicting your experiences accurately is quite useful for getting more of the kinds of experiences you want, which is to say, winning. The problems arise when you over-optimize for truth.

The trouble is that not all humans, let alone all agent-like things in the universe, are rational or truth-seeking (they have other purposes that ground their thinking). This means you’re going to need some skill at reasoning about non-truth-seeking agents. But the human brain is kinda bad at thinking about minds not like our own. A highly effective way to overcome this is to build cognitive empathy for others by learning to think like them (not just modeling them from the outside, but running a simulated thought process as if you were them). But this requires an ability to prioritize something other than truth, both because the agent being simulated doesn’t prioritize it and because our brains can’t actually firewall off these simulations cleanly from “our” “real” thoughts (cf. worries about dark arts, which we’ll discuss shortly). Thus, in order to accurately model non-truth-seeking agents, we need some ability to care about stuff other than seeking truth.

The classic failure mode of not accurately modeling non-truth-seeking agents, one I think many of us are familiar with, is the overly scrupulous, socially awkward rationalist or nerd who is very good at deliberate thinking and can work out all sorts of plans that should, in theory, get them what they want, but which fall apart the moment they have to interact with other humans. The consequences: difficulties in dating and relationships, trouble convincing others about the biggest problems in the world, being incentivized to closely associate only with other overly scrupulous, socially awkward rationalists, and so on.

This trap, where seeking truth locks you out of the truth whenever you try to model non-truth-seeking agents, is pernicious because the only way out of it is to ease up on the very thing you’re trying to do: seek truth.

Please don’t round off that last sentence to “give up truth-seeking”! That’s definitely not what I’m saying! What I’m saying is that optimizing too hard for truth Goodharts you on truth. Why? Two reasons. First, past some point the objective of truth can’t be optimized for any further, because the problem of the criterion places hard limits on truth-seeking given the ungrounded foundation of our knowledge and beliefs. Second, you can’t accurately model all agents if you can only simulate them with your truth-seeking mind. So the only option (lacking future tech that would let us change how our brains work, anyway) is to ease off a bit and let in concerns other than truth.

But isn’t this a dark art to be avoided? Maybe? The reality is that you are not yourself actually a truth-seeking agent, no matter how much you want it to be so. Humans are not designed to optimize for truth, but we are very good at deceiving ourselves into thinking we are (or deceiving ourselves about any number of things). We instead care about lots of things, like eating food, breathing, physical safety, and, yes, truth. But we can’t optimize for any one of those things to the total exclusion of the others, because once we’re already doing our current best at getting what we want, we can only trade one concern off against another along the optimization curve. Past some point, trying to get more truth will only make each of us worse off overall, not better, even if we did succeed in getting more truth, which I don’t think we can anyway due to Goodhart effects. It’s not a dark art to accept that we’re human rather than idealized Bayesian agents with hyperpriors for the truth; it’s just facing the world as we find it.
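If it helps to see that trade-off claim in miniature, here’s a toy sketch. Everything in it (the square-root utility curves, the fixed effort budget, the specific numbers) is invented purely for illustration; the only point is that when returns diminish and effort is finite, the allocation that maximizes overall winning puts something well short of everything into truth.

```python
import math

TOTAL_EFFORT = 10.0  # a made-up, fixed budget of effort

def overall_winning(effort_on_truth: float) -> float:
    """Toy 'winning' score: diminishing returns on truth and on everything else."""
    effort_on_everything_else = TOTAL_EFFORT - effort_on_truth
    return math.sqrt(effort_on_truth) + 2 * math.sqrt(effort_on_everything_else)

# Sweep allocations from "no effort on truth" to "all effort on truth".
best_score, best_allocation = max(
    (overall_winning(e / 10), e / 10) for e in range(0, 101)
)
print(best_allocation, best_score)  # peaks at an interior point (2.0 here), not at 10.0
```

Change the made-up curves and the peak moves around, but as long as the other things you care about carry any weight at all, it doesn’t sit at the “all effort on truth” end.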

And to bring this around to my favorite topic, this is why the problem of the criterion matters: the fact that you are not a perfectly truth-concerned agent means you are already grounding your knowledge in things other than truth, and the sooner you figure that out, the sooner you can get on with more winning.

Thanks to Justis for useful feedback on an earlier draft of this essay via the LW feedback service.