I Can See How I Am Dumb

When I’m talking to somebody, sometimes I lose the conversational thread. Or sometimes I feel like there is this thing I want to say that seems relevant now, but I just can’t remember it.

Or maybe I’m trying to solve a particular problem. I throw myself again and again at the problem, but it just won’t budge. And then after some long amount of time, possibly hours, I realize that the solution was extremely simple. And I just failed to execute the right kind of solution-finding algorithm that would have found this very simple solution quickly.

I would expect that people with more intelligence perform better in these domains. They probably have an easier time remembering and retaining the right things. That alone might be sufficient to explain a large chunk of what makes a more intelligent person perform better.

If you quickly remember the right things that are relevant in the moment, and if you can keep track of more things in your head at the same time without losing track of what those things were, then that might account for a large chunk of why an intelligent person performs better at any particular task.


The core point here is that I think everybody, even somebody much smarter than me, can see various failure modes in their own cognition and realize that these might be so fundamental that there is no direct way of changing them.

I’m pretty sure that at some level what sorts of things your brain spits out into your consciousness and how useful that information is in the given situation, is something that you can’t fundamentally change. I expect this to be a hard-coded algorithm, and I expect there to be many such hard-coded cognitive processes that can’t be changed (at least not in major ways).

The cognitive improvements that you can apply will be at a higher level. To me, it seems that is what much of the Sequences are about. You can understand that there is something like the sunk cost fallacy, and understanding what it is allows you to train yourself to recognize when you fall prey to it (though that is a separate step from understanding what you actually need to do to get most of the benefit).

And the way you would do this is, for example, by using TAPs (trigger-action plans). In a sense, TAPs are a way to install a very small hook into your brain, in the programming sense. My current model is that you install a little watcher program that watches your sensory input streams and your internal model of the world. Then, when it detects a specific pattern, it triggers the execution of another algorithm. The interesting thing is that if you do this well, all of this will become subconscious. So it's not that you can't change your subconscious algorithms at all: TAPs are a way to install tiny new subconsciously executed algorithms into your brain.
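The hook analogy above can be made concrete. Here is a minimal sketch, entirely my own illustration rather than anything from the TAP literature: a trigger is a predicate over incoming observations, an action is what runs when it fires, and "installing" a TAP means registering the pair with a watcher loop. The sunk-cost watcher and its trigger phrase are hypothetical examples.

```python
from typing import Callable, List


class TAP:
    """A trigger-action pair: run `action` whenever `trigger` matches an observation."""

    def __init__(self, trigger: Callable[[str], bool], action: Callable[[str], str]):
        self.trigger = trigger
        self.action = action


def process_stream(observations: List[str], taps: List[TAP]) -> List[str]:
    """Feed each observation to every installed TAP; collect the actions that fired."""
    fired = []
    for obs in observations:
        for tap in taps:
            if tap.trigger(obs):
                fired.append(tap.action(obs))
    return fired


# A hypothetical sunk-cost watcher: it fires on a telltale thought pattern
# and triggers a reflective check.
sunk_cost_tap = TAP(
    trigger=lambda obs: "already invested" in obs,
    action=lambda obs: "pause: am I committing the sunk cost fallacy?",
)

print(process_stream(
    ["nice weather today", "but I've already invested so much in this project"],
    [sunk_cost_tap],
))
```

The point of the sketch is only the structure: the trigger must be a cheap, concrete pattern check, because in the brain the whole loop has to run below conscious awareness.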

So let me give an example of an algorithm that I think is largely unchangeable. Let's do an experiment. Please follow the bolded instructions:

Imagine a car in your mind’s eye.

Now, your brain will probably have brought some specific car to mind. Maybe it's a Porsche 911, the Cybertruck, or another Tesla. The point is that based on your reading the word car, your brain pulled a lot more information out of its depths, such as an image or a "feeling of carness". How is your mind doing this? You read the word car and your mind produces some qualia associated with the concept of a car.

Now think of a car part.

Now what did you imagine? A steering wheel, a door, an engine, a cylinder, a wheel, a windshield, an antenna? Notice that there was one thing that came to mind first, and then maybe another thing. But how did your brain generate that specific thing? Why, for example, did you think of a tire instead of a door? When I do this experiment, I have no introspective access to what is going on at the low level. How does this retrieval algorithm work, the one that retrieves information stored in the brain that's associated with being a car part?

The general version of this exercise goes as follows.

1) Imagine anything.
2) Then imagine something related to that thing.
3) Repeat from step 2.
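To make the shape of this exercise explicit, here is a toy sketch of it as a random walk over an association graph. The graph, the starting concept, and the step count are all invented for illustration; the brain's actual retrieval algorithm is exactly the opaque part. The sketch only shows the loop structure: pick something, retrieve an associate, repeat.

```python
import random

# A hand-made toy association graph (purely illustrative).
associations = {
    "car": ["wheel", "engine", "door"],
    "wheel": ["tire", "spoke"],
    "engine": ["cylinder", "piston"],
    "door": ["handle", "window"],
    "tire": ["rubber"],
}


def association_walk(start: str, steps: int, rng: random.Random) -> list:
    """Repeatedly retrieve something related to the current concept."""
    chain = [start]
    current = start
    for _ in range(steps):
        related = associations.get(current)
        if not related:  # nothing retrieved: the walk dead-ends
            break
        current = rng.choice(related)
        chain.append(current)
    return chain


print(association_walk("car", 3, random.Random(0)))
```

In the human version, the interesting question is how `rng.choice` gets replaced: why your brain surfaces a tire rather than a door is precisely what we have no introspective access to.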

The opaqueness of this algorithm makes me think that you can't change it, at least not in a major, direct way. You can of course change its inputs: you can learn new things and think in specific ways to see new connections between things. In the future, the semantic lookup algorithm will use that newly created knowledge, and you might be better at thinking in particular contexts. But it's not as if the retrieval algorithm itself was fundamentally changed.

I'm very unsure how different this specific algorithm is for somebody much smarter. Is there actually some important difference between their retrieval algorithm and mine? Possibly not.

But whatever makes somebody smarter than me, I expect it to depend largely on various "low-level hardware" configurations that can't be changed. There are people who are better than me at not losing the thread in complicated technical discussions. Here it seems clear that this ability is not very improvable, because it doesn't really depend on anything you have learned. It's not as if acquiring some piece of knowledge would make me better at not losing a conversational thread. Of course, there are techniques you might apply at a higher cognitive level that would help you keep track of what you are talking about. But imagine two people who use the same techniques (by default, no technique at all) to manage a conversation, and the only difference between them is intelligence.

To be clear, here I mean something very specific by intelligence: the kind of intelligence that doesn't depend on what you know. In this sense, there could be somebody who is much smarter but behaves much more irrationally. They might do so because they lack various pieces of knowledge, including knowledge of algorithms that can be used to evaluate and generate new knowledge.

Monkeys and Bananas

I'm not quite sure why I have been writing this article. I just started and then didn't stop. I think I have been thinking about this issue because, until I was 25, I had not met somebody who was obviously smarter than me. It was sort of a shock to realize that there are other people who could be smarter than me.

One piece of advice I can give for handling this realization well is to recognize that even if you are not playing with the best character stats, you can still play the game. And you can play it well.

Just because there is somebody who is smarter than you, who works on some specific topic, doesn’t mean that you shouldn’t work on it. You should work on the thing where you can make the largest positive difference.

Just imagine a civilization of monkeys. It’s extremely important to these monkeys that they breed lots of different varieties of bananas such that they will never get tired of eating bananas. An average monkey researcher in the field of banana breeding can create a new kind of banana in 10 years with an average taste score of 5.

Now, there are some monkey researchers who are not only faster at creating new types of bananas, but who also, on average, create better-tasting ones. Imagine that you are literally the worst monkey researcher in existence. On average you will create less tasty bananas, and you will take longer than the slowest monkey researcher already working on new bananas.

Does this mean you shouldn't become a banana breeder? If your civilization is in dire turmoil because your monkey brethren constantly tire of eating bananas, because there are just not enough varieties, then it might be very obvious that this is the most impactful use of your time. You just need to make sure that your counterfactual impact is actually positive. Doing unethical experiments, such as trying to figure out how to turn monkeys into extremely tasty bananas, might actually be worse than doing nothing if you fundamentally care about all monkeys being happy.
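The counterfactual test in the parable reduces to a one-line subtraction. Here is a back-of-the-envelope sketch; all the numbers are invented for illustration and only the sign of the result matters.

```python
def counterfactual_impact(value_if_you_act: float, value_if_you_dont: float) -> float:
    """Your impact is the difference you make, not the absolute value you produce."""
    return value_if_you_act - value_if_you_dont


# Worst breeder in a neglected field: a mediocre new banana (taste 3) that
# otherwise would not exist at all. 3 - 0 > 0, so it's still worth doing.
print(counterfactual_impact(3, 0))

# Unethical shortcut: very tasty bananas (taste 9), but at a welfare cost of
# 20 to the monkeys involved. 9 - 20 - 0 < 0: worse than doing nothing.
print(counterfactual_impact(9 - 20, 0))
```

The design point is that the baseline is "what would happen without you", not zero: if a better breeder would have made the same banana anyway, your counterfactual impact shrinks accordingly.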