Don’t Trust Your Brain

…if it’s anything like mine[1]. If it isn’t, then maybe treat this as an opportunity to fight the typical mind fallacy by getting a glimpse at what’s going on in other minds.


I’ve made many predictions and gone through a lot of calibration training over the years, and have generally become pretty well calibrated in several domains. Yet I’ve noticed that some areas seem surprisingly immune to calibration—areas where my brain just doesn’t want to learn that it’s systematically and repeatedly wrong about something. This post is about three such areas, plus a bonus.

(1) Future Ability to Remember Things

Sometimes, I feel incredibly confident that some piece of information will be easy for me to recall in the future, even when this is entirely false. Some examples:

  • When I meet a new person and they mention their name, then while hearing it, I typically feel like “yeah, easy, it’s trivial for me to repeat that name in my head, so I’ll remember it forever, no problem”—and then 5 seconds later it’s gone.

  • When I open some food/​drink product and then put it into the fridge, I usually take a note on the product about when I opened it, as this helps me make a judgment in the future about whether it should still be good to consume. Sometimes, when I consider taking such a note, I come up with some reason why, for this particular thing in this particular situation, it will be trivial for future-me to remember when I opened it. These reasons often feel extremely convincing, but in the majority of cases, they turn out to be wrong.

  • While taking notes on my phone to remind me of something in the future, I often overestimate how likely it is that I’ll still know what I meant by that note when I read it a few days or weeks later. For instance, when taking one note recently, my phone autocorrected one word to something else, which was pretty funny. I decided to leave it like that, assuming that future-me would also find it funny. That part was true—but future-me also really struggled to figure out what the note was actually trying to tell me.

Even knowing these patterns, it’s often surprisingly difficult to override that feeling of utter conviction that “this case is different”, assuming that this time I’ll really remember that thing easily.

(2) Local Optima of Comfort

When I’m in an unusually comfortable situation but need to leave it soon, I often feel a deep, visceral sense that leaving is in some way unbearable. It’s as if the current comfort were so much better than whatever awaits afterwards that it takes a huge amount of activation energy to get moving. And almost every time, within seconds of finally leaving that situation, I realize it’s far less bad than I imagined.

Two examples:

  • Leaving a hot shower. Or more precisely, turning a hot shower cold. I tend to end all my showers with about a minute of progressively colder water. Interestingly, reducing the temperature from hot to warm is the hardest step and occasionally takes me a minute of “fighting myself” to do it. Going from lukewarm to freezing cold is, for some reason, much easier.

  • When I’m woken up by an alarm in the morning, I’m very often—even after 8 or 9 hours of sleep—convinced that I need more sleep and that getting up right then would be a huge mistake and would lead to a terrible day, even though I know that this has practically never been the case. Usually, within 5-10 minutes of getting up, I’ll feel alert and fine. Yet on many mornings, I have to fight the same immense conviction that today is different, and this time I really should sleep an extra hour.

Somewhat related may be my experience of procrastination and overestimating how unpleasant it would be to engage with certain aversive tasks. This almost always ends up being a huge overestimate, and the thing I procrastinated on for ages turns out to be basically fine. My System 1 is pretty slow to update on this, though.

(3) Interpersonal Conflict

I’m generally quite conflict-avoidant. But not everyone is—some people are pretty blunt, and when something I did or said seems objectionable to them, they don’t pull punches. I suppose that becoming irrational in the face of blame is not too unusual, and it’s easy to imagine the evolutionary benefits of such an adaptation. Still, it’s interesting to observe how, when under serious attack, my brain becomes particularly convinced that I must be entirely innocent and that this attack on me is outrageously unjust.

After reading Solve for Happy by Mo Gawdat, I didn’t take that much away from the book—but one thing that did stick with me is the simple advice of asking yourself “is this true?” when you’re reflecting on some narrative you’re constructing in your head during a conflict. Not “does this feel true?”—it basically always does—but whether my internal narrative is actually a decent representation of reality. Quite often it isn’t, and even just asking the question makes it much easier to step out of the one-sided framing.

(4) Bonus: Recognizing I’m in a Dream

One of the most common things to happen to me in dreams is that I realize that some particular situation is very strange. I then tend to think something like “Wow, typically, this kind of thing only happens to me in dreams. It’s very interesting that this time it’s happening for real.” I’ve had this train of thought hundreds of times over the years. Sometimes I think it while awake, and then immediately do a reality check. In a dream, however, I’m so gullible that I almost never make the jump to asking myself, “wait a second, is this a dream?” I just briefly note how curious it is that I’m experiencing dream-like weirdness for real, and don’t act on it at all, because I’m so convinced that I’m awake that I don’t even bother to check. Eventually, I wake up and facepalm.

Takeaway

Through calibration training, I learned that I can train my System 1 to make pretty accurate predictions by refining the translation of my inner feeling of conviction into probabilities. The areas mentioned in this post are deviations from that—contexts where, even with considerable effort, I haven’t yet managed to entirely overcome the systematically wrong predictions/​assessments that my intuition keeps throwing at me. Subjectively, they often feel so overwhelmingly true that it’s difficult to resist the urge to just believe my intuition. Having this meta-awareness about it certainly helps somewhat. Indeed, as in the “Is this true?” example, sometimes it’s mostly about asking the right question in the right moment. Like learning to respond to my brain’s claim that “clearly, I’ll easily remember this thing in the future” with “wait, will I?” instead of just going along with it—at which point I typically know that I should be suspicious.

My impression is that the more I think about this and discuss it with others, the better I get at recognizing such situations when they happen, even if progress on this is annoyingly slow. So, uhh, perhaps reading about it is similarly helpful.

  1. ^

    I made a very informal survey among some friends, and it seemed like ~80% of them could relate to what I describe at least to some degree.