Ontologies are Operating Systems: Post-CFAR 1

[I recently came back from volunteering at a CFAR workshop. I found the whole experience to be 100% enjoyable, and I’ll be doing an actual workshop review soon. I also learned some new things and updated my mind. This is the first in a four-part series on new thoughts that I’ve gotten as a result of the workshop. If LW seems to like this one, I’ll post the rest too.]

I’ve been thinking more about how we even reason about our own thinking, our “ontology of mind”, and about the internal mental model we each have of how our brain works.

(Roughly speaking, “ontology” means the framework you view reality through, and I’ll be using it here to refer specifically to how we view our minds.)

Before I continue, it might be helpful to ask yourself some of the questions below:

  • What is my brain like, perhaps in the form of a metaphor?

  • How do I model my thoughts?

  • What things can and can’t my brain do?

  • What does it feel like when I am thinking?

  • Do my thoughts often influence my actions?

<reminder to actually think a little before continuing>

I don’t know about you, but for me, my thoughts often feel like they float into my head. There’s a general sense of effortlessly having things stream in. If I’m especially aware (i.e. metacognitive), I can then reflect on my thoughts. But for the most part, I’m filled with thoughts about the task I’m doing.

Though I don’t often go meta, I’m aware of the fact that I’m able to. In specific situations, knowing this helps me debug my thinking processes. For example, say my internal dialogue looks like this:

“Okay, so I’ve sent the forms to Steve, and now I’ve just got to do—oh wait what about my physics test—ARGH PAIN NO—now I’ve just got to do the write-up for—wait, I just thought about physics and felt some pain. Huh… I wonder why… Move past the pain, what’s bugging me about physics? It looks like I don’t want to do it because… because I don’t think it’ll be useful?”

Because my ontology of how my thoughts operate includes the understanding that metacognition is possible, this is a “lever” I can pull on in my own mind.

I suspect that people who don’t engage in thinking about their thinking (via recursion, talking to themselves, or other things to this effect) may have a less developed internal picture of how their minds work. Things inside their head might seem to just pop in, with less explanation.

I posit that having a less fleshed-out model of your brain affects your perception of what your brain can and can’t do.

We can imagine a hypothetical person who is self-aware and generally a fine human, except that their internal picture of their mind feels very much like a black box. They might have a sense of fatalism about some things in their mind or just feel a little confused about how their thoughts originate.

Then they come to a CFAR workshop.

What I think a lot of the CFAR rationality techniques give these people is an upgraded internal picture of their mind with many additional levers. By “lever”, I mean a thing we can do in our brain, like metacognition or focusing (I’ll write more about levers next post). This upgraded picture draws attention to those levers and empowers people to have greater awareness and control in their heads by “pulling” on them.

But the new levers themselves aren’t exactly the point. CFAR has mentioned that the point of teaching rationality techniques is not only to give people shiny new tools, but also to improve their mindset. I agree with this view—there does seem to be something like an “optimizing mindset” that embodies rationality.

I posit that CFAR’s rationality techniques upgrade people’s ontologies of mind by changing their sense of what is possible. This, I think, is the core of an improved mindset—an increased corrigibility of mind.

Consider: Our hypothetical human goes to a rationality workshop and leaves with a lot of skills, but the general lesson is bigger than that. They’ve just seen that their thoughts can be accessed and even changed! It’s as if a huge blind spot in their thinking has been removed, and they’re now looking at entirely new classes of actions they can take!

When we talk about levers and internal models of our thinking, it’s important to remember that we’re really just talking about analogies or metaphors that exist in the mind. We don’t actually have access to our direct brain activity, so we need to make do with intermediaries that exist as concepts, which are made up of concepts, which are made up of concepts, etc etc.

Your ontology, the way that you think about how your thoughts work, is really just an abstract framework that makes it easier for “meta-you” (the part of your brain that seems like “you”) to interface with your real brain.

Kind of like an operating system.

In other words, we can’t directly deal with all those neurons; our ontology, which contains thoughts, memories, internal advisors, and everything else, is a conceptual interface that allows us to better manipulate the information stored in our brain.
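If a sketch helps, here’s the analogy in toy Python code. Everything here is invented for illustration (the class names, the lever dictionary are all hypothetical); the point is only that what feels possible is bounded by what your interface exposes:

```python
# A toy sketch of "ontology as operating system". The raw substrate is
# opaque; an ontology is the conceptual interface layered on top of it.
# All names here are hypothetical, made up for this post.

class RawBrain:
    """Stands in for the neurons: implementation details we never touch."""
    def __init__(self):
        self._activity = {}  # inaccessible to direct introspection

class Ontology:
    """A conceptual interface exposing 'levers' over the raw substrate,
    much as an OS exposes system calls over raw hardware."""
    def __init__(self, brain, levers):
        self._brain = brain
        self._levers = levers  # name -> callable acting on the brain

    def available_levers(self):
        # What you believe you can do is limited to what the interface
        # exposes; an impoverished ontology lists fewer levers.
        return list(self._levers)

    def pull(self, name):
        # "Pulling a lever" only works if your ontology includes it.
        if name not in self._levers:
            raise KeyError(f"{name!r} isn't in this ontology")
        return self._levers[name](self._brain)

# A black-box ontology exposes almost nothing...
black_box = Ontology(RawBrain(), levers={})
# ...while an upgraded one adds levers, changing what feels possible.
upgraded = Ontology(RawBrain(), levers={
    "metacognition": lambda b: "noticing a thought as a thought",
})
print(black_box.available_levers())    # []
print(upgraded.available_levers())     # ['metacognition']
print(upgraded.pull("metacognition"))  # noticing a thought as a thought
```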

However, the operating system you acquire by interacting with CFAR-esque rationality techniques isn’t the only type of upgraded ontology you can acquire. Other models exist which may be just as valid. Different ontologies may draw boundaries around different mental objects and empower your mind in different ways.

Leverage Research, for example, seems to be building its view of rationality from a perspective deeply grounded in introspection. I don’t know too much about them, but in a few conversations, they’ve acknowledged that their view of the mind is based much more on beliefs and internal views of things. This suggests they’d have a different sense of what is and isn’t possible.

My own personal view of rationality often models humans as, for the most part, a collection of TAPs (trigger-action plans, basically glorified if-then loops). This ontology leads me to think about shaping the environment, precommitment, priming/conditioning, and other ways to modify my habit structure. Within this framework of “humans as TAPs”, I search for ways to improve.
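To make the if-then picture concrete, here’s a minimal sketch in Python. It’s entirely illustrative (the triggers, actions, and situation keys are all made up), but it shows why, under this ontology, “improving” mostly means editing the trigger table or shaping the inputs so unwanted triggers never fire:

```python
# A toy model of "humans as TAPs": each TAP pairs a trigger predicate
# over the current situation with an action to fire. All names here
# are hypothetical, invented for illustration.
taps = [
    (lambda s: s.get("location") == "kitchen" and s.get("hungry"),
     lambda: print("grab a snack")),
    (lambda s: s.get("phone_visible"),
     lambda: print("check phone")),
]

def step(situation):
    """Run one pass of the if-then loops: fire every TAP whose trigger matches."""
    for trigger, action in taps:
        if trigger(situation):
            action()

step({"location": "kitchen", "hungry": True, "phone_visible": True})
# -> grab a snack
# -> check phone

# "Shaping the environment" amounts to changing the inputs so an
# unwanted trigger never fires (e.g. leaving the phone in another room):
step({"location": "kitchen", "hungry": True, "phone_visible": False})
# -> grab a snack
```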

This is in contrast with another view I hold of myself as an “agenty” human who has free will in some meaningful sense. Under this ontology, I focus on metacognition and executive function. Of course, this assertion of my ability to pick and choose my actions seems to be at odds with my first view of myself as a habit-stuffed zombie.

It seems plausible, then, that when rationality techniques seem at odds with one another, like the two views above, it’s because they’re operating on fundamentally different assumptions about how to interface with the human mind.

In some ways, it seems like I’m stating that every ontology of mind is correct. But what about mindsets that model the brain as a giant hamburger? That seems obviously wrong. My response here is to appeal to practicality. In reality, all these mental models are wrong, but some of them can be useful. No ontology accurately depicts what’s happening in our brains, but the helpful ones allow us to think better and make better choices.

The biggest takeaway for me, after realizing all this, was that even my mental framework, the foundation from which I built up my understanding of instrumental rationality, is itself based on certain ontological assumptions. And these assumptions, though perhaps reasonable, are still just a helpful abstraction that makes it easier for me to deal with my brain.