Ontologies are Operating Systems: Post-CFAR 1
[I recently came back from volunteering at a CFAR workshop. I found the whole experience to be 100% enjoyable, and I’ll be doing an actual workshop review soon. I also learned some new things and updated my mind. This is the first in a four-part series on new thoughts that I’ve gotten as a result of the workshop. If LW seems to like this one, I’ll post the rest too.]
I’ve been thinking more about how we reason about our own thinking, our “ontology of mind”, and how our internal mental model of how our brain works shapes what we believe we can do with it.
(Roughly speaking, “ontology” means the framework you view reality through, and I’ll be using it here to refer specifically to how we view our minds.)
Before I continue, it might be helpful to ask yourself some of the below questions:
- What is my brain like, perhaps in the form of a metaphor?
- How do I model my thoughts?
- What things can and can’t my brain do?
- What does it feel like when I am thinking?
- Do my thoughts often influence my actions?
<reminder to actually think a little before continuing>
I don’t know about you, but for me, my thoughts often feel like they float into my head. There’s a general sense of effortlessly having things stream in. If I’m especially aware (i.e. metacognitive), I can then reflect on my thoughts. But for the most part, I’m filled with thoughts about the task I’m doing.
Though I don’t often go meta, I’m aware of the fact that I’m able to. In specific situations, knowing this helps me debug my thinking processes. For example, say my internal dialogue looks like this:
“Okay, so I’ve sent the forms to Steve, and now I’ve just got to do—oh wait what about my physics test—ARGH PAIN NO—now I’ve just got to do the write-up for—wait, I just thought about physics and felt some pain. Huh… I wonder why… Move past the pain, what’s bugging me about physics? It looks like I don’t want to do it because… because I don’t think it’ll be useful?”
Because my ontology of how my thoughts operate includes the understanding that metacognition is possible, this is a “lever” I can pull on in my own mind.
I suspect that people who don’t engage in thinking about their thinking (via recursion, talking to themselves, or other things to this effect) may have a less developed internal picture of how their minds work. Things inside their head might seem to just pop in, with less explanation.
I posit that having a less fleshed-out model of your brain affects your perception of what your brain can and can’t do.
We can imagine a hypothetical person who is self-aware and generally a fine human, except that their internal picture of their mind feels very much like a black box. They might have a sense of fatalism about some things in their mind or just feel a little confused about how their thoughts originate.
Then they come to a CFAR workshop.
What I think a lot of the CFAR rationality techniques give these people is an upgraded internal picture of their mind with many additional levers. By “lever”, I mean a thing we can do in our brain, like metacognition or Focusing (I’ll write more about levers next post). The upgraded internal picture of their mind draws attention to these levers and empowers people to have greater awareness and control in their heads by “pulling” on them.
But it’s not exactly these new levers that are the point. CFAR has mentioned that the point of teaching rationality techniques is to not only give people shiny new tools, but also improve their mindset. I agree with this view—there does seem to be something like an “optimizing mindset” that embodies rationality.
I posit that CFAR’s rationality techniques upgrade people’s ontologies of mind by changing their sense of what is possible. This, I think, is the core of an improved mindset—an increased corrigibility of mind.
Consider: Our hypothetical human goes to a rationality workshop and leaves with a lot of skills, but the general lesson is bigger than that. They’ve just seen that their thoughts can be accessed and even changed! It’s as if a huge blind spot in their thinking has been removed, and they’re now looking at entirely new classes of actions they can take!
When we talk about levers and internal models of our thinking, it’s important to remember that we’re really just talking about analogies or metaphors that exist in the mind. We don’t actually have access to our direct brain activity, so we need to make do with intermediaries that exist as concepts, which are made up of concepts, which are made up of concepts, etc etc.
Your ontology, the way that you think about how your thoughts work, is really just an abstract framework that makes it easier for “meta-you” (the part of your brain that seems like “you”) to more easily interface with your real brain.
Kind of like an operating system.
In other words, we can’t directly deal with all those neurons; our ontology, which contains thoughts, memories, internal advisors, and everything else, is a conceptual interface that allows us to better manipulate information stored in our brain.
However, the operating system you acquire by interacting with CFAR-esque rationality techniques isn’t the only type of upgraded ontology you can acquire. There exist other models which may be just as valid. Different ontologies may draw boundaries around other mental things and empower your mind in different ways.
Leverage Research, for example, seems to be building its view of rationality from a perspective deeply grounded in introspection. I don’t know too much about them, but in a few conversations, they’ve acknowledged that their view of the mind is much more based on beliefs and internal views of things. This suggests they’d have a different sense of what is and isn’t possible.
My own view of rationality often treats humans as, for the most part, a collection of TAPs (basically glorified if-then loops). This ontology leads me to think about shaping the environment, precommitment, priming/conditioning, and other ways to modify my habit structure. Within this framework of “humans as TAPs”, I search for ways to improve.
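To make the “glorified if-then loops” framing concrete, here is a minimal toy sketch of what a TAP-style model of behavior could look like if written out literally. The trigger and action names are purely illustrative, not anything CFAR prescribes:

```python
# Toy model of TAPs (Trigger-Action Plans): each plan pairs a trigger
# predicate with an action, and "behavior" is just running every action
# whose trigger fires on the current situation.

def make_tap(trigger, action):
    """Bundle a trigger predicate and an action into one plan."""
    return {"trigger": trigger, "action": action}

def run_taps(taps, situation):
    """Return the actions of every TAP whose trigger matches the situation."""
    return [tap["action"](situation) for tap in taps if tap["trigger"](situation)]

# Hypothetical habits; the situation keys are made up for illustration.
taps = [
    make_tap(lambda s: s.get("walking_past_gym"), lambda s: "go inside and exercise"),
    make_tap(lambda s: s.get("phone_buzzes"), lambda s: "leave phone in pocket"),
]

print(run_taps(taps, {"walking_past_gym": True}))
```

Under this framing, “shaping the environment” amounts to controlling which situations (inputs) ever occur, and “conditioning” amounts to editing the list of installed trigger-action pairs.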
This contrasts with another view I hold of myself as an “agenty” human that has free will in a meaningful sense. Under this ontology, I focus on metacognition and executive function. Of course, this assertion of my ability to pick and choose my actions seems to be at odds with my first view of myself as a habit-stuffed zombie.
It seems plausible then, that rationality techniques which often seem at odds with one another, like the above examples, occur because they’re operating on fundamentally different assumptions of how to interface with the human mind.
In some way, it seems like I’m stating that every ontology of mind is correct. But what about mindsets that model the brain as a giant hamburger? That seems obviously wrong. My response here is to appeal to practicality. In reality, all these mental models are wrong, but some of them can be useful. No ontology accurately depicts what’s happening in our brains, but the helpful ones can allow us to think better and make better choices.
The biggest takeaway for me after realizing all this was that even my mental framework, the foundation from which I built up my understanding of instrumental rationality, is itself based on certain assumptions of my ontology. And these assumptions, though perhaps reasonable, are still just a helpful abstraction that makes it easier for me to deal with my brain.
A lot of this post sounds like the NLP (Neuro-Linguistic Programming) way of dealing with mental models.
NLP is not without its flaws, but a lot of people invested a lot of time into building it.
If you want to go deeper into modeling mental processes in a pragmatic way I would recommend you to read The Emprint Method by Leslie Cameron-Bandler. The book speaks about how to do modeling of thought processes and presents notation for how to do it.
Huh, okay. Yeah, I suspect that everything I’m coming to terms with has already been integrated in some form by people across time / communities.
I do like pragmatism, so I’m making a note to check out the Cameron-Bandler book. Thanks for the reference!
Just to be clear. There’s plenty of NLP literature that’s pragmatic in the sense of wanting to offer quick fixes or specific tricks. This book is not that kind of book but more meta and how to do modelling.
Got it. Thanks for the clarification.
I agree with a lot of the article but I get the feeling you are putting too much weight on the internal dialogue. I am interested to hear your thoughts.
The human mind has the ability to reproduce constructs that were imported from the senses in the imagination. The reproduction of the sense of hearing in the imagination allows us to use mentally reproduced sounds in the form of language and create the internal dialogue. But we can, in the same way, reproduce any of the senses such as reproducing the image of a dog in our mind. You can do the same (depending on the strength of your imagination) for smell, touch, taste etc. and all their combinations in complex scenes.
So apart from exploring the internal dialogue through the internal dialogue itself you have many other options. A few examples:
[1] Observe the dialogue without manipulation
[2] Stop the stream of the imagination containing the internal dialogue for varied amounts of time
[3] Stop all imaginary activity, etc.
If, for example, you are acting through [3], walking around having only (to the extent it is possible) direct perceptual representations in your mind and no internal dialogue, would you say that you are acting without an operating system?
It is also worth pointing out that rational ‘levers’ (rationalisations) are but one and not necessarily the best strategy to use for mental control. For some examples of the complexities involved I recommend Daniel Wegner’s wonderful book ‘White Bears and Other Unwanted Thoughts: Suppression, Obsession, and the Psychology of Mental Control’.
You are still acting with an operating system. What you perceive depends a lot on your mental categories. You can change what you perceive by learning new categories.
Phonemes in foreign languages are interesting. For most native German speakers, ‘cap’ and ‘cab’ and ‘believe’ and ‘belief’ sound the same. Normal acquisition of a foreign language in adulthood doesn’t give you the ability to make this distinction. On the other hand, I have programmed an Android app that can teach the ability to distinguish the sounds.
The phoneme example is quite simple and easy to explain, but the same goes for more complex categorizations. I learn anatomy to be able to better perceive the human body, and this is a standard way of developing finer perceptive skill in the form of bodywork that I am learning.
I was referring to the identification of the operating system with the internal dialogue that I see in the article. But you are making a further point.
It is true that our perception can be refined by the acquisition of new mental categories. If these categories are presented to us in the form of words then we have to correctly perceive them in our environment. These words are the means by which other people communicate to us what they have discovered through their senses. Our own refinement has to be experiential using their words as a map/guideline. The first person that discovers new refinements though may do it in a different way. By looking, listening, touching etc. in a meditative state of concentration, the mind will naturally process the input and discover more and more nuanced patterns and subtleties. The result can be formulated in intellectual terms and then fed back to the process (I am simplifying a more fluid and complex process here).
You can discover new perceptions during meditation, but it’s good to find names and ways to conceptualize them. If you don’t do that, it can be mentally destabilizing.
In Piaget’s stages of learning, you are at accommodation but you don’t go on to equilibration.
Learning new categories for conceptualization in a way that they become native categories unfortunately takes more time than a 4-day workshop, and as such the focus on teaching powerful individual techniques that CFAR does doesn’t go deep into it. It’s not as easy to demonstrate.
If you however actually do practice Gendlin’s Focusing a lot, that technique will lead to the acquisition of new categories that will reach the state of equilibration.
Not sure where you got that conclusion from. Maybe Piaget (if his observation is correct) was referring to child development where conceptualisation is essential.
A sufficiently flexible belief system does not destabilize with new perceptions. Destabilisation happens if the belief system has assigned certainty to unknowns when it shouldn’t have and has built a self image, social relations, opinions, outlooks, moral rules etc. on the unstable foundation. If someone needs conceptualisation, for every new perception, on fear of destabilisation, I would recommend re-examining base assumptions.
I don’t know too much about destabilization, but some more reasons why I think that naming things is good. (If it turns out we agree on this point, feel free to just let it float):
Erm, never mind. I wrote some things, and then I realized you pointed out below that you agree w/ the gist of what I had been pointing to.
It’s a conclusion that comes from the work of Danis Bois. Danis Bois is interesting because he spends decades teaching meditation (among others things). From position of being a sought-after teacher he started going to university to study education and later became a professor at a university in Portugal.
A child deals with new experiences for which it doesn’t have preexisting concepts. It’s quite possible to live as an adult without
If you have someone who had no conceptualization of what Gendlin calls the felt sense and starts having the related experience a lot of their self image might be open for adaptation.
Beforehand they might have thought of themselves as their heads, but that concept gets challenged through the new way of being in relationship with the body.
With the example of Gendlin’s Focusing I’m not sure whether you get those experiences without conceptualization but adopting new conceptualization is important for actually deeply integrating the concepts.
With advanced meditation, there’s usually the point of perceptions coming up that do require new conceptualization. In the absence, there are problems like what’s called “the dark night of the soul”.
I can’t find much info on Bois. You could present his argument in your own words.
I am not following Gendlin either but from what I am reading what he is describing as ‘felt sense’ is listening for emotions, moods and bodily sensations in general. I never had any issues to integrate all these without the ‘felt sense’ concept.
I don’t doubt that people that have not given any thought or practice on what it is to be a human could benefit from using concepts to ease their transition (I explained in the previous post). I just think that after a certain amount of refinement of your belief system destabilisation is not an issue.
I don’t doubt the importance of conceptualisation. I am just suggesting being careful about generalising into saying that it is always needed.
I don’t understand the ‘dark night of the soul’ problem.
Yes, there’s little information that’s published in English.
There are patterns that arise when teaching for decades and seeing different students develop different problems. There is empirical experience that dealing with those problems in a certain way is helpful.
It seems like you do have conceptualization as “listening for emotions”, “listening for moods”, and “listening for bodily sensations”. The fact that you separate emotions from moods here shows that you have more categories than a lot of people.
That might well be true, that you don’t understand the problem or haven’t been exposed to people who suffer from it. That doesn’t mean that it isn’t an issue for many people who spent a lot of time meditating and making the experiences that come along with it.
That is fair. I will have to ponder further if it is at all possible to have no conceptualisation. Thanks!
Yes, of course. I will tell you my thought process on reading about it in the interest of transparency. My first impression is that there seems to be truth in the writing. Since I have a belief, based on experience, in the existence of certain states it seems to me that the author is possibly genuine ( in contrast to being the product of a cult formation ). But the metaphors he is using do not resonate with me. Furthermore, I then refer to the teachings that warn of not following material that was intended for a different time and people. Exercises and formulations, they warn, are always presented in the forms of the current culture and prescribed according to the state of the student. Following exercises and old formulations without guidance can be, at best, just providing emotional and intellectual stimulation and at worst, as dangerous as trying different medicine based on personal preference.
These are some of my thoughts.
When I talk about knowledge that comes from Danis Bois, I do have personal guidance from people who have their 10,000 hours.
As far as the term “the dark night of the soul” goes, it’s not a Bois term but a term used by people from quite different backgrounds.
It’s one thing not to take a medicine on personal preference. It’s another thing when you take a nootropic for which there’s little academic research, and the people who have empirical experience with that nootropic warn you of possible states associated with it.
I don’t believe that the 10,000 hours rule bears any relevance on judging the suitability of people in these areas. But I can not pretend to have the right or the ability to judge Danis Bois’ as I know nothing about him. The same goes when it comes to ‘the dark night of the soul’ concept and its author.
Yes of course. I guess it all depends on the quality of advice we get from people. Which depends on their level of attainment. For which our judgement depends on our assumptions on what constitutes ‘attainment’. Which is not necessarily what in reality constitutes attainment. Which only the people that have attained know. That is, If attainment is possible at all.
This is a really complex discussion that would require an extensive exposition of our belief systems and which would throw this comment thread completely off topic. Maybe another time :)
It’s not a concept with a single author. It’s a concept that was first used in Christian medieval theology. In the context where I have seen it, it was repurposed for what happens to some people who practice a lot of Buddhist meditation and an LW comment from http://lesswrong.com/lw/5h9/meditation_insight_and_rationality_part_1_of_3/42b2 was likely the first time I came across it. I used it in the past to talk with other rationalists about meditation who don’t necessarily share a meditative tradition with me and the fact that people from different traditions can relate to it gives it substance in my mind.
Thank you for the link. That was a clearer definition of the dark night. I also skimmed Ingram’s book. I am following a different approach. Again here are my thoughts without trying to imply certainty.
Although I meditate, I would never follow extensive exercises like the ones described in Ingram’s book. Instead I exercise patience and work on the more mundane fundamentals as indicated. There are clear warnings in traditional material not to choose exercises based on what we think is interesting or to induce mystical experiences, as the result is a type of spiritual vanity in which the experience is used for self-inflation. In that light I would see the dark night of the soul not as something to push through but as a sign that I am following the wrong path. Unfortunately, people that reach this stage are probably too invested to admit that they have been following the wrong path for so many years. It is easier for the self to push through and avoid the pain of realising its true stage of development.
People can find plenty of material in the work of Idries Shah if they can deal with someone that tells them not to do all these things that are emotionally or intellectually exciting. It is at least easy to observe that starting a quest to be freed from the self by choosing exercises through the self could not possibly work.
Sorry if I gave that impression. I don’t equate operating system to internal dialogue, FWIW.
Yes, I must have gotten the wrong impression. Would you say it is fair of me to say that you are putting emphasis on the internal dialogue, as you refer to it, when you talk about ‘thinking about thinking’?
Maybe I can try to articulate better what gives me the impression of us talking from different perspectives. I feel that you are putting emphasis on thinking in a meta level when I consider even more important going out of thought itself to get an even better view of the human mind. That is not to say that thinking should stop or that meta thinking is not really useful. It is just my view that for people like me (us?), who are obsessed with rational analysis to the point of using it as a source of pleasure here in LW, taking long breaks from intellectual analysis is essential.
Hmmm, I think that meta-level thinking = 1 form of an operating system.
I think that operating system = the sorts of mental concepts you believe are atomic, the things which make up your internal picture of your mind.
Other potential operating systems:
Your feelings as a data trove, where all your feelings form an important part of your worldview
Resolve as a worldview, which is very Nietzschean in nature and focused on human determination and Actually Trying.
The sorts of paradigms used in meditation, where you’re focused on awareness of your bodily sensations and are trying to cultivate a general sense of “this is where I am” and other things I’m more unfamiliar with.
Now my claim is that none of the paradigms (i.e. “operating systems”) are incorrect, but rather, they can all be useful abstractions for thinking about how your mind works, and using each one as a basis for creating “rationality techniques” can source skills that look very different from one another.
Ok, I have to switch gears a bit. I will try to get into a mode of not arguing towards something but just telling you my thoughts as they are; as an experiment in communication. If you find this unhelpful feel free to ignore my comment!
I see what you are saying and I am wondering; what am I arguing about? Our exchange gives me a sense of being vague. As if we are not communicating properly and there isn’t a clear focus on what we are talking about. I think this might be because of the vagueness of my original criticism. After some reflection on my previous comments, and reading some of your links, I can express my disagreement as:
Articulating concepts is obviously useful. But this has been done by people for centuries. There are a few possibilities:
[1] No culture has in the past reached the level of understanding we have at the moment, so this is the time to create new concepts that would allow us to understand humans. By us I mean me and you in this discussion.
[2] The material is out there and is constantly maintained by the people that have the knowledge.
It seems that in my belief system I have reached the conclusion that [2] is the case.
(If you are getting ‘evangelist’ alert I assure you this is not going to happen. Also to reiterate: I am not religious)
So, to sum up. It is my belief that you are trying to reinvent the wheel here. I am aware that stating something like that without offering solutions and material is kind of a shitty argument. But hey, now you know something about what I really think...
Ah, no worries! I agree that what I am doing has already been invented in some form or another (see NLP). I take it as a given that what I’m writing here may already be articulated by other people in a much more informative way.
My general goals here are to illustrate what I believe, and if people see connections to existing concepts, I’m happy if they point out “hey, lifelonglearner, this idea is already a thing in the form of concept X!”, because then I can read more on it.
I think that is a great use of humility! I am attempting to do the same :)
Hi Erfeyah,
Thanks for your thoughts!
I agree that your examples give a much broader sense of things that can happen in the mind. There’s also things like recalling memories / daydreaming which also don’t seem to fall into my category.
I’m unsure if the sense-reproduction point would be exactly the same, but I’m not a neurologist. (Like, would the same parts of the brain that govern visual perception light up when you ask someone to imagine a dog? What about people with not so great visual imaginations?)
I think that if you were walking around with [3], you wouldn’t currently be implementing a useful operating system in the sense of allowing you to do cool things with your brain. To be clear, I don’t always think that your brain is under an operating system, but the idea of an ontology seems to be a useful abstraction that explains why rationality techniques seem to be different or why learning them can give people a general mindset boost.
(Is that clearer? Unsure if I got the meaning across.)
The Wegner book is new to me. Thanks for pointing it out!
I’m unsure if we’re pointing to the same thing when we say “lever”, though, as my explanation in this essay was pretty shoddy. By “lever”, I mean a mental action that is available to you, like many of the CFAR techniques like Murphyjitsu. Like, it’s a mental “motion” you can go through in your brain, a little like a step-by-step algorithm.
Heyo :)
You mean exactly the same as perceiving it? It would not be because in perception the image is continuously updated by the senses. There are practices that purport to reach extremely high levels of visualisation skills but I do not have experience with them. My visualisation skills are not that great either. But it is still a fact that we can mentally reproduce, with varying amounts of skill, any of our senses internally. Notice how good we are with the internal dialogue’s sound reproduction which we have been practising all our lives.
I just think that we should not call it operating system as the brain can operate without the need for constant intellectual input. I believe it is important as it is a kind of blind spot of western culture that considers the internal dialogue as a constant. So I instead might refer to it as ‘the intellect’, parts of which are ‘belief systems’ and the ‘methods of rationality’.
Also, there are cool things to do with your brain by stopping the intellectual part. Your answer gave me the impression that you would consider walking around (or doing anything else) not thinking as a missed opportunity or a waste of time. If that is so, I can say, from personal experience which you may or may not believe I have, that you are mistaken.
I agree. I consider an ‘ontology’ to be a ‘belief system’ and I think there is definitely value in realising that! If you reach that realisation through rationality that is great! But there are other ways so I am not sure about the ‘different’ characterisation.
I agree that from the sense of my internal experience, I can use solely my brain to conjure up internal states similar to those I’d get from actually having sense experience. So I think that it can feel much the same from the inside; I think we both agree that it’s not exactly the same thing, though?
My internal dialogue is not always on, and I am aware of this, so I think we’re in agreement here too. If “operating system” isn’t the right word, I’m trying to point at “a way of internally representing your mind that gives you increased perspective on how your mind works, which might also cache certain algorithms that are representation dependent.” Maybe “abstraction layer” already covers most of this?
I think we’re in agreement here as well. I suspect it’d be taxing and suboptimal for people to be meta-aware all of the time, vis a vis the textbook rationality mindset. I think that rationality is one of several ontologies you can be using, which might be good for achieving certain goals.
In other cases, things like “blank time” where you’re not thinking can be good for letting your brain just run on its own. Or, deliberate “non-thinking” can have helpful effects too.
By “different”, I just mean how rationality techniques like focusing and precommitment seem to be based off of differing assumptions of how the mind works. I think that’s pretty reasonable?
The textbook definition of rationality from Thinking and Deciding is:
Seems good. I was trying to gesture at the typical LW definition, but this is helpful too. Thanks!
Yes, we don’t seem to disagree much. Just to clarify a few points one last time and answer your questions.
Yes, I agree. They are of much lower clarity. Nevertheless, I do currently believe it is basically the same process. I am basing this on my exploration of the dream state in which without external stimulus I can get perfect realistic (and even hyper-realistic) full sensory experiences.
(sorry, I know this is not really on topic with your post. I was just pointing to the internal dialogue which we now agree on)
My opinion is that there is no need to create a label. ‘Belief system’ already exists as a term and I think it describes what you are talking about. You became aware of what a ‘belief system’ is and realised you can change it. It is a powerful realisation. You could use other terms as well as metaphors if you are trying to teach people of what a belief system is, but that is another subject.
Just to make sure we are talking about the same things here. What do you mean ‘meta-aware’? Could you describe to me your internal experience when you are meta-aware?
Could you also clarify what you mean with “blank time”. When we leave our brain to run on its own the internal dialogue does not stop. We are just flooded with automatic thoughts. No?
There are types of meditation that lead to mental states where the flooding stops.
Indeed, that is what I am pointing at. I am just not sure lifelonglearner realises that these states are possible.
Erm, I’m trying to point to the sort of mindfulness where you’re thinking about your tasks rather than just doing them. Something like, “Ah yes, now let’s go and do this written assignment. Okay, looks like I need to write about post-modernism. What do I know about post-modernism? Huh, I notice I am feeling bored...”
I think there’s several things I was trying to describe. I’ve found that leaving my brain to run on its own can be good, but it feels less like there’s an internal observer that’s speaking. Thoughts just sort of stream in, in wordless impulses and hazy flashes.
But I also buy the claim that complete “non-thinking” can be helpful for Reasons, which I assume is not the same thing as the above.
Yes, that is what I thought. If you are thinking in sound/language you are using your internal dialogue. Mindfulness is pausing your internal dialogue and focusing your senses on the current experience. Not talking to yourself about what you are doing. What I think is happening, and this is an assumption on my part, is that you have never experienced the stopping of your internal dialogue as it seldom happens spontaneously.
I am not sure what you mean by ‘Reasons’ but I do believe practising meditation, concentration and contemplation (please avoid, in your mind, associating my use of the terms to religion) are essential. As essential as practising our intellectual skills.
I’ve done some mindfulness meditation, but I haven’t made it into a consistent practice. I was trying to compress the sorts of potential health benefits, general life improvements, etc. into a black box of “reasons” (which otherwise might have spiraled into its own discussion) with the capital “Reasons”.
Ah sorry. Didn’t get it :)