My experience with the “rationalist uncanny valley”

Epistemic status: Very Uncertain. Originally written in the belief that rationality has harmed my everyday life; revised to reflect my current belief that its effect has been somewhat positive. The tone of this article, and some of the content, may constitute self-punishment behavior, signaling, or frustration rather than genuine belief.

Introduction

I’m currently a male second-year undergrad in computer science. I am not clinically depressed. My first exposure to the rationalist community was largely limited to reading HPMOR and ~40% of the original Sequences around 2013-2014; I’ve had rationalist/EA friends continuously since then; in mid-2019 I started following LW; and in March 2020 I read almost every scrap of COVID-19 content. I’m not sure how to evaluate my strength as a rationalist, but I feel epistemically slightly weaker than the average LW commenter, and I’d guess my applied skills are average for non-rationalists of my demographic.

Others described the phenomenon of the “rationalist uncanny valley” or “valley of bad rationality” as early as 2009:

It has been observed that when someone is just starting to learn rationality, they sometimes appear to be worse off than they were before.

I’ve known about the rationalist uncanny valley since 2013 and was willing to accept some temporary loss in effectiveness. Indeed, before March 2020, the damage to my everyday life was small and masked by self-improvement. However, with isolation, my life has ground to a halt, in part due to “rationalist uncanny valley” failure modes and to failure modes I’m predisposed to that rationality training has exacerbated. Looking back, one of these also occurred during my exposure to the community in 2014. This post is an attempt to characterize the negative effects exposure to rationality has had on my life; it is not representative of the overall effect.

Examples

1. Bad form when reading LW material

I’m very competitive, and my self-worth is mostly derived from social comparison, a trait which at its worst can cause me to value winning over maintaining relationships, or to avoid people of higher status than me so as to dodge upward comparison. In reading LW and rationalist blogs, I think I’ve turned away from useful material that takes longer for me to grasp because it makes me feel inferior. I sometimes binge on low-quality material, sometimes even seeking out highly downvoted posts; I suspect I do this because it lets me mentally jeer at people or ideas I know are incorrect. More commonly, I’ll seek out material that is already familiar to me. Worse, it’s possible that all this reading has merely confirmed beliefs I was already predisposed to, and has therefore been net-negative.

As a concrete example, Nate Soares has a post on the “dubious virtue” of desperation. It’s dubious because it must be applied carefully: one must be desperate enough to go all-out in pursuit of a goal, yet not burn out or visibly signal desperation to others.

I am already strong at the positive aspects of desperation, but the idea of “dubious virtues” appeals to me (maybe it’s the idea that I can outdo others by mastering a powerful technique often considered too dangerous). I read the article several times, largely disregarding the warnings because they made me feel uncomfortable, with the result that I burned out and signaled desperation to others.

Something similar but more severe happened in 2013-14, when I fell into the following pattern (not quite literally true): A friend links me an LW article. Then my defense mechanisms of epistemic learned helplessness activate and I stop reading. (Didn’t they make the basilisk thing? I should read all about that so I can identify suspect arguments.) Then I decide I should prove my defense mechanisms wrong by reading a quarter of HPMOR in one night and memorizing the Rationalist Virtues! Then I completely stop reading out of fear that rationality is a cult/mind-hacking attempt. I decide to wait several years, to dampen the cycle, before becoming a rationalist. It’s possible I spent six years in the rationalist uncanny valley, and I’m not sure there was a simple way out before approximately last year.

2a. Predictions and being a Straw Vulcan

Others have gone through a phase of making all decisions by System 2 because they no longer trust System 1. My case is somewhat similar. Over the last few months, I’ve worked on making calibrated predictions, including predicting my own future to inform career-planning decisions. Perhaps due to the way I approach this exercise, I feel much less in touch with my emotions, and all predictions feel fuzzier. (It’s also possible that my emotions are just unstable or suppressed due to the circumstances.) My feelings about the world vary with my mood; I now try to correct for this, and feel uncertain enough that I defer to a reference class or to other people. Since I then never check my own gut feelings against reality, I get no practice using them.

2b. Predictions reify pessimism

Calibrating myself might be a good thing to do in ordinary times, but isolation has made me mildly depressed, reducing my willpower. Consider a commitment I made recently to study with a peer over Zoom. In ordinary times, there’s a 90% chance I keep this commitment. Taking my reduced agency into account, I predict an 80% chance of doing something I would normally do at 90%. But there’s only a 65% chance I actually do something I predict at 80%, and so on: each revised prediction further lowers my expected follow-through, so I iterate until prediction and follow-through agree. That fixed point is about 25%, which turns out to be accurate. Sometimes this fixed point is 0%.
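To make the arithmetic concrete, here is a minimal sketch of that fixed-point iteration in Python. The quadratic follow_through curve is a hypothetical stand-in, fit only to reproduce the three numbers above (0.90 → 0.80, 0.80 → 0.65, a stable fixed point near 0.25); the real mapping from prediction to behavior is unknown, and the function names are invented for illustration.

```python
def follow_through(p_predicted: float) -> float:
    """Hypothetical miscalibration curve: maps a stated prediction to the
    probability of actually following through. Coefficients are fit to the
    three data points in the text; the true curve is unknown."""
    return 1.1888 * p_predicted**2 - 0.5210 * p_predicted + 0.3059


def honest_prediction(p: float, tol: float = 1e-6, max_iter: int = 100) -> float:
    """Iterate prediction -> expected follow-through until the two agree;
    the result is the one prediction that doesn't undermine itself."""
    for _ in range(max_iter):
        p_next = max(0.0, follow_through(p))  # clamp: probability can't go below 0
        if abs(p_next - p) < tol:
            break
        p = p_next
    return p


print(round(honest_prediction(0.90), 2))  # 0.25
```

If the curve drops steeply enough that every downward revision lowers follow-through by even more, the iteration runs all the way to the clamp at zero; that is the “sometimes this fixed point is 0%” case.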

In the past, I would mentally commit to actions I thought I wanted to take (e.g. meditating regularly), then not actually follow through. Since realizing how often this happens, I now make very few commitments. This has technically made me much more trustworthy, but the number of commitments I keep (to myself and others) has decreased in absolute terms.

3. Not Actually Trying

Reading feels much better than actually trying, especially when trying requires willpower and time and the outcome is uncertain.

I think actually trying was out of reach in 2014, even if I had developed enough trust in LW to self-modify based on LW1 principles: EY notes that the biggest mistake with the original Sequences was neglecting applications and opportunities for practice.

4. Disorientation and miscellaneous disruptions

Anna Salamon says that a particular type of disorientation can result when a new rationalist discards common sense, manifesting as an inability to do the “ordinary” thing when it is correct. After LW discovered that the efficient market hypothesis is sometimes false relative to strong predictors, I updated strongly towards rationalist exceptionalism in general. That update may be correct, but it also increased my disorientation. Some examples I can identify:

  • I tried to convince a friend who’s a good fit for climate policy to shift to AI policy.

  • I noticed that I sometimes need to rationalize my curiosity about the world as something useful.

  • The failure mode where rationalists begin to see non-rationalists as normies, NPCs, or otherwise sub-human: I now find talking to non-rationalists much less interesting.

  • I often sink into a debate mindset in which it feels like any proposition can be made true if I find the right argument for it, a mindset I previously entered only while playing devil’s advocate. When arguing for a point, I’m slightly more often unsure whether I actually believe it than I used to be. I have no idea what’s going on here, since I’m not much better at rhetoric than before. Is my unconscious rebelling against efforts to stop motivated reasoning? Am I trying to play status games? Should I have resolved my unwillingness to apologize?

  • Several counterproductive, intrusive thoughts that haven’t gone away despite several months of discussion with friends and occasional therapy:

    • My self-worth is derived from my absolute impact on the world. This sometimes causes a vicious cycle: I feel worthless, make plans that take that worthlessness into account, and then feel even more worthless.

    • I’m a bad person if I’m a dedicated EA but don’t viscerally feel the impact I have.

    • I’m a bad person if I eat meat (even though vegetarianism is infeasible for me in the short term due to circumstances, and remains a long-term goal).

    • After thinking about morality for a while, I’m 35% nihilist. This is supposed to have no effect on my actions (nihilism can just be subtracted out), but everything feels approximately 35% hollow.

Conclusion

While I derived benefits from the content, I think it’s plausible that COVID-19 was otherwise a bad time to dive headfirst into rationality. If I were to write guidelines for people exactly like me, they would be:

  • Engage with material that interests you, but recognize discomfort and unhealthy reading patterns.

  • Consume material only when you can actually practice it (e.g. when you’re mentally stable and have some minimum amount of slack).

  • Practice it (I still have to figure out how).