How It Feels to Improve My Rationality

Note: this started as a comment reply, but I thought it got interesting (and long) enough to deserve its own post.

Important note: this post is likely to spark some extreme reactions, because of how human brains are built. I’m including warnings, so please read this post carefully and in the order written, or don’t read it at all.

I’m going to attempt to describe my subjective experience of progress in rationality.

Important edit: I learned from the responses to this post that there’s a group of people with whom this resonates pretty well, and there’s also a substantial group with whom it does not resonate at all, to the degree that they don’t know if what I’m saying even makes sense or correlates with rationality in any meaningful way. If you find yourself in the second group, please notice that trying to verify whether I’m doing “real rationality” or not is not a way to resolve your doubts. There is no reason why you would need to feel the same. It’s OK to have different experiences. How you experience things is not a test of your rationality. It’s also not a test of my rationality. All in all, because of publishing this and reading the comments, I’ve found out some interesting stuff about how some clusters of people tend to think about this :)

Also, I need to mention that I am not an advanced rationalist, and my rationality background is mostly reading Eliezer’s sequences and self-experimentation.

I’m still going to give this a shot, because I think it’s going to be a useful reference for a certain level of rationality progress.

I even expect myself to find all that I write here silly and stupid some time later.

But that’s the whole point, isn’t it?

What I can say about how rationality feels to me now is going to be pretty irrelevant pretty soon.

I also expect a significant part of readers to be outraged by it, one way or the other.

If you think this has no value, maybe try to imagine a rationality-beginner version of you that would find a description such as this useful. If only as a reference that says: yes, there is a difference. No, rationality does not feel like a lot of abstract knowledge that you remember from a book. Yes, it does change you deeply, probably more deeply than you suspect.

In case you want to downvote this, please do me a favour and send me a private message suggesting how I could change it so that it stops offending you.

Please let go of any urge to compare yourself to me or anyone else, or to prove anyone’s superiority or inferiority.

If you can’t do this, please bookmark this post and return to it some other time.

...

...

Ready?

So, here we go. If you are free from againstness and competitiveness, you’re welcome to read on, and feel free to tell me how this resonates, and how different it feels inside your own head and at your own level.


Part 1. Pastures and fences

Let’s imagine a vast landscape, full of vibrant greenery of various sorts.

Now, my visualization of object-level rationality is staking out territories: small parcels of pasture surrounded by fences.

Inside the fences, I tend to have more neat grass than anything else. It’s never perfect, but when I keep working on an area, it slowly improves. If neglected, weeds will start growing back sooner or later.

Let’s also imagine that the ideas and concepts I generalize as I go about my work become seeds of grass, carried by the wind.

What the work feels like is running back and forth between the object level (my pastures) and the meta level (scattering seeds).

As a result of this running back and forth, I’m able to stake out new territories or improve previous ones, to get better coverage and fewer weeds.

The progress I make in my pastures feeds back into interesting meta-level insights (more seeds carried by the wind), which in turn tend to spread to new areas even when I’m not helping with this process on purpose.

My pastures tend to concentrate in clusters, in areas that I have worked on the most.

When I have lots of action in one area, the large number of seeds generated (meta-techniques) is more often carried to other places, and at those times I experience the most change happening in other areas, especially new and unexplored ones.

However, even when I can reuse some of my meta-ideas (seeds), to have a nice and clear territory I still need to go over there and put in the manual work of clearing it up.

As I’m getting better and more efficient at this, it becomes less work to gain new territories and improve old ones.

But there’s always some amount of manual labor involved.


Part 2. Tells of epistemic high ground

Disclaimer: not using this for the Dark Side requires a considerable amount of self-honesty. I’m only posting this because I believe most of you folks reading this are advanced enough not to shoot yourselves in the foot by, e.g., using this in arguments.

Note: If you feel the slightest urge to flaunt your rationality level, pause and catch it. (You are welcome.) Please do not start any discussion motivated by this.

So, what clues do I tend to notice when my rationality level is going up, relative to other people?

Important note: This is not the same as “how do I notice if I’m mistaken” or “how do I know if I’m on the right path”. These are things I notice after the fact and judge to be correlates, but they are not to be used to choose a direction in learning or in sorting out beliefs. I wrote the list below precisely because it is the less-talked-about part, and it’s fun to notice things. Somehow everyone seems to have thought this is more than I meant it to be.

Edit: check Viliam’s comment for some concrete examples that make this list better.

In a particular field:

  • My language becomes more precise. Where others use one word, I now use two, or six.

  • I see more confusion all around.

  • Polarization in my evaluations increases. E.g. two sensible sounding ideas become one great idea and one stupid idea.

  • I start getting strong impulses that tell me to educate people who I now see are clearly confused, and who could be saved from their mistakes in one minute if I could only tell them what I know… (spoiler alert: this doesn’t work).

Rationality level in general:

  • I stop having problems in my life that seem to be common all around, and that I used to have in the past.

  • I forget what it is like to have certain problems, and I need to constantly remind myself that what seems easy to me is not easy for everyone.

  • Other people’s writings move forward on the path from intimidating to insightful to sensible to confused to pitiful.

  • I start to intuitively discriminate between the rationality levels of more of the people above me.

  • Intuitively judging someone’s level requires less and less data: from reading a book, to reading ten articles, to reading one article.

Important note: although I am aware that my mind automatically estimates rationality levels of various people, I very strongly discourage anyone (including myself) from ever publishing such scores/lists/rankings. If you ever have an urge to do this, especially in public, think twice, and then think again, and then shut up. The same applies to ever telling your estimates to the people in question.

Note: Growth mindset!


Now let’s briefly return to the post I started out replying to. Gram_Stone suggested that:

You might say that one possible statement of the problem of human rationality is obtaining a complete understanding of the algorithm implicit in the physical structure of our brains that allows us to generate such new and improved rules.

After everything I’ve seen so far, my intuition suggests Gram_Stone’s idealized method wouldn’t work from inside a human brain.

A generalized meta-technique could become one of the many seeds that help me in my work, or even a very important one that would spread very widely, but it still wouldn’t magically turn raw territory into perfect grassland.


Part 3. OK or Cancel?

The closest I’ve come to Gram_Stone’s ideal was when I witnessed a whole cycle of improvement in a certain area being executed subconsciously.

It was only brought to my full attention when an already polished solution, in verbal form, popped into my head while I was taking a shower.

It felt like a popup on a computer screen that had “Cancel” and “OK” buttons, and after I chose OK the rest continued automatically.

After this single short moment, I found that a subconscious habit was already in place, ensuring a change in my previous thought patterns, and it proved to work reliably long after.


That’s it! I hope reading this has left you better off than not reading it would have.

Meta-note about my writing agenda: I’ve developed a few useful (I hope) and unique techniques and ideas for applied rationality, which I don’t (yet) know how to share with the community. To get that chunk of data birthed out of me, I need some continued engagement from readers who will give me feedback and generally show interest (this needs to be done slowly and in the right order, and I would have trouble persisting otherwise). So for now I’m writing separate posts noncommittally, to test reactions and (hopefully) gather some folks who could support me in the process of communicating my more developed ideas.