On Trust

“Trust”, as the word is typically used, is a… weird concept, to me. Like, it’s trying to carve the world in a way which I don’t naturally carve it myself. This post is my attempt to convey what I find weird about “trust”, and what adjacent concepts I personally find more natural instead.

The Weirdness Of “Trust”

Here are some example phrases which make “trust” feel like a weird/​unnatural concept to me:

  • “I decided to trust her”

  • “Should I trust him?”

  • “Trust me”

  • “They offered me their trust”

To me, the phrase “I decided to trust her” throws an error. It’s the “decided” part that’s the problem: beliefs are not supposed to involve any “deciding”. There’s priors, there’s evidence, and if it feels like there’s a degree of freedom in what to do with those, then something has probably gone wrong. (The main exception here is self-fulfilling prophecy, but that’s not obviously centrally involved in whatever “I decided to trust her” means.)

Similarly with “trust me”. Like, wat? If I were to change my belief about some arbitrary thing, just because somebody asked me to change my belief about that thing, that would probably mean that something had gone wrong.

“Should I trust him?” is a less central example, but… “should” sounds like it has a moral/​utility element here. I could maybe interpret the phrase in a purely epistemic way—e.g. “should I trust him?” → “will I end up believing true things if I trust him?”—but also that interpretation seems like it’s missing something about how the phrase is actually used in practice? Anyway, a moral/​utility element entering epistemic matters throws an error.

The thing which is natural to me is: when someone makes a claim, or gives me information, I intuitively think “what process led to them making this claim or giving me this information, and does that process systematically make the claim/​information match the territory?”. If Alice claims that moderate doses of hydroxyhopytheticol prevent pancreatic cancer, then I automatically generate hypotheses for what caused Alice to make that claim. Sometimes the answer is “Alice read it in the news, and the reporter probably got it by misinterpreting/​not-very-carefully-reporting a paper which itself was some combination of underpowered, observational, or in vitro/​in silico/​in a model organism”, and then I basically ignore the claim. Other times the answer is “Alice is one of those friends who’s into reviewing the methodology and stats of papers”, and then I expect the claim is backed by surprisingly strong evidence.

Note that this is a purely epistemic question—simplifying somewhat, I’m asking things like “Do I in fact think this information is true? Do I in fact think that Alice believes it (or alieves it, or wants-to-believe it, etc)?”. There’s no deciding whether I believe the person. Whether I “should” trust them seems like an unnecessary level of meta-reasoning. I’m just probing my own beliefs: not “what should I believe here”, but simply “what do I in fact believe here”. As a loose general heuristic, if questions of belief involve “deciding” things or answering “should” questions, then a mistake has probably been made. The rules of Bayesian inference (or logical uncertainty, etc) do not typically involve “deciding” or “shouldness”; those enter at the utility stage, not the epistemic stage.
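
To make that concrete, here’s a toy sketch (Python, with made-up numbers, purely for illustration) of what “probing my own beliefs” cashes out to: the credence I end up with falls straight out of my prior and my model of the claim-producing process, with no “decide” step anywhere.

    # A toy Bayesian sketch of "what do I in fact believe here?" - all numbers
    # are made up for illustration; the point is that the credence falls out of
    # priors and likelihoods, with no "decide to trust" step anywhere.

    def posterior_claim_true(prior_true, p_claim_given_true, p_claim_given_false):
        """P(claim is true | Alice asserted it), via Bayes' rule."""
        numerator = p_claim_given_true * prior_true
        denominator = numerator + p_claim_given_false * (1 - prior_true)
        return numerator / denominator

    prior = 0.05  # prior that any given "X prevents cancer" claim is true

    # Process 1: Alice is repeating a science-news article. Such articles get
    # written whether or not the underlying result is real, so the likelihood
    # ratio is close to 1 and the claim barely moves me.
    print(posterior_claim_true(prior, p_claim_given_true=0.6, p_claim_given_false=0.4))   # ~0.07

    # Process 2: Alice is the friend who picks apart methodology and stats, and
    # rarely passes along claims that turn out false, so the likelihood ratio is
    # large and the same words carry much more evidence.
    print(posterior_claim_true(prior, p_claim_given_true=0.6, p_claim_given_false=0.02))  # ~0.61

Same words from Alice, very different credence, purely because the claim-producing process differs.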

What’s This “Trust” Thing?

Is there some natural thing which lines up with the concept of “trust”, as it’s typically used? Some toy model which would explain why “deciding to trust someone” or “asking for trust” or “offering trust” make sense, epistemically? Here’s my current best guess.

Core mechanism: when you “decide to trust Alice”, you believe Alice’s claims but, crucially, if you later find that Alice’s claims were false then you’ll be pissed off and probably want to punish Alice somehow. In that case she’s “breached trust”—a wording which clearly evokes breach of contract.

“Trust”, in other words, is supposed to work like a contract: Alice commits to tell you true things, you commit to believe her. Implicitly, you’ll punish Alice (somehow, with some probability) if her claim turns out to be false. This gives Alice an incentive to make true claims, and you therefore assign higher credence to Alice’s claims.
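
To spell out the incentive side of that toy model (a quick sketch with hypothetical numbers, not anything rigorous): Alice prefers honesty only when her expected punishment for a detected breach outweighs whatever she’d gain from a false claim.

    # Toy sketch of the trust-contract incentive - every parameter here is
    # hypothetical. Alice prefers telling the truth only if the expected cost of
    # a detected breach outweighs whatever she gains from the false claim.

    def alice_prefers_truth(gain_from_lying, punishment, p_caught):
        """True iff the expected punishment exceeds Alice's gain from lying."""
        return p_caught * punishment > gain_from_lying

    # With a real chance of detection and a meaningful punishment, the contract
    # does its job...
    print(alice_prefers_truth(gain_from_lying=1.0, punishment=5.0, p_caught=0.5))  # True

    # ...but if falsehoods are rarely detected or rarely punished, the incentive
    # evaporates.
    print(alice_prefers_truth(gain_from_lying=1.0, punishment=5.0, p_caught=0.1))  # False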

Trust-as-contract matches up nicely with standard phrasings like “offer/​accept trust”, “breach of trust”, and our examples above like “trust me”.

Epistemically, this trust-contract makes sense from a “buyer’s perspective” (i.e. you, choosing to trust Alice) insofar as you expect Alice to make true claims if-and-only-if operating under the trust-contract. And that’s where I usually get off the train.

Why I Don’t Trust “Trust”

It is very rare that I expect someone to make true claims if-and-only-if operating under some kind of implicit “trust contract”.

For starters, people largely just have pretty crap epistemics, at least when it comes to stuff where there’s any significant uncertainty about the truth of claims in the first place. (Of course for the vast majority of day-to-day information, there isn’t much uncertainty - the sky is blue, 2*2 = 4, I’m wearing black socks, etc.) My mother makes a lot of claims based on what she read in the news, and I automatically discard most such claims as crap. I “don’t trust my mother” on such matters, not because I think she’ll “betray my trust” (i.e. intentionally breach contract), but because I expect that she is simply incapable of reliably keeping up her end of such a trust-contract in the first place. She would very likely breach by accident without even realizing it.

Second, this whole trust-contract thing tends not to be explicitly laid out, so people often don’t expect punishment if their claims are false. Like, if I just say “would you like to bet on that claim being true?” or better yet “would you like to insure against that claim turning out to be false?”, I expect claimants to suddenly be much less confident of their claims, typically. (Though to some extent that’s because operationalizations for such agreements are tricky.) And even when it is clear that blatant falsehoods will induce punishment, it’s still a lot of work to demonstrate that a claim is false, and a lot of work to punish someone.

Third problem: the trust-contract sets up some terrible incentives going forward. If you trust Alice on some claim, and then Alice finds out her claim was false, she’s now incentivized to hide that information, or to avoid updating herself so that she can credibly claim that she thinks she was telling the truth.

What I Do Instead

As mentioned earlier, when I hear a claim, I tend to automatically hypothesize what might have caused the claimant to make that claim. Where did they get it from? Why that claim, rather than some other? Usually, thinking about that claim-producing process is all I need to decide how much to believe a claim. It’s just another special case of the rationalist’s standard core question: “what do you believe, and what caused you to believe it?”.

… and then that’s it, there mostly just isn’t much additional need for “trust”.