Last time we were discussing the Axial Age within ancient India, focusing on a pivotal figure, Siddhartha Gautama (the Buddha), and his particular story. We talked about the two modes of being represented in his story of leaving the palace: the having mode and the being mode. We talked about modal confusion and about overcoming it.
We followed him to where he’s sitting under the Bodhi tree and he achieves a deep kind of realization, a deep state of enlightenment. Along the way, we had discussed what mindfulness is, how mindfulness operates through attentional scaling and how it can increase your cognitive flexibility, your capacity for insight, and then we were trying to draw this all together with some cognitive science discussion of what it is to experience enlightenment.
Now I’m not offering right now a complete account or anything like a comprehensive theory of enlightenment; we’re going to be slowly working towards that as we move through this lecture series. But I do want to get into, and continue, the discussion of these higher states of consciousness.
So if you remember, these states are very problematic, but they’re at the core of many of the Axial Age world religions and foundational philosophies. This is the idea that people have an alternative state of consciousness that they regard as somehow more real than their everyday state of consciousness. That’s problematic precisely because we tend to judge realness by how well we get an overall coherence in our intelligibility (how we’re making sense of things), but with these altered states, which are very different from our everyday consciousness and therefore do not cohere with it, people do the opposite. Instead of rejecting the altered state the way we reject dreaming (for example) because it doesn’t cohere with our everyday experience, people reject everyday experience as illusory, and they say that this state of consciousness somehow gives them improved access to reality. As you remember, as we’ve been going through the Axial Age Revolution and the sense of wisdom and meaning attendant upon it, this ability to transcend illusion and get connected to what is more real is central to what wisdom means, and having some deep sense of connectedness to reality is also central to what it is to regard one’s life as authentically meaningful in some fashion.
What are some of the metrics people use to judge whether something felt “real”? What are some metrics used to resolve fork-conflicts between different ways of making sense of the world?
What does it mean when these metrics disagree, and how do you resolve that conflict?
(A few example conflicts: A dream that is obviously not self-consistent, but still makes useful predictions. A vivid memory you have that none of your friends can recall. A high-confidence intuitive prediction whose certainty colors your perception, but which others insist rests on invalid starting premises.)
A bit of context: I noticed an odd connection, which I wanted to share, between the way he described a “realness-gauging heuristic” and how blockchains work. This eventually led to the question bubbling up.
Vervaeke mentioned that a problem with some Higher State of Consciousness (HSC) experiences is that some people experience an “Axial Revolution in miniature,” and decide that the real world is the dream, and their experience in the altered state was the reality. (Which they usually feel a need to return to, due to what he dubbed a “Platonic meta-drive” towards realness.)
Usually, with altered states (ex: literal dreaming), one ends up treating the altered state as a dream-like subjective experience and understanding one’s waking life as reality. In these cases, this seems to get flipped.
To paraphrase Vervaeke...
Realness is the pattern of intelligibility with the widest, richest scope. It makes the most sense of your experience; your beliefs, your memories, etc.
The way I interpret this is that one of the common heuristics to ascertain “realness” is to search for the most extensive, highest-continuity, or most vividly experienced comprehension algorithm that you’ve ever built.
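To make my reading concrete, here is a toy sketch (entirely my own construction, not Vervaeke’s; the experiences, labels, and vividness weights are invented for illustration) of scoring candidate interpretations by how much experience they make sense of, and treating the widest-scope one as “real”:

```python
# Toy "realness-gauging heuristic" (my construction, not Vervaeke's):
# score each candidate interpretation by the total vividness of the
# experiences it makes sense of, and treat the highest-scoring,
# widest-scope interpretation as the "real" one.

def realness_score(interpretation, experiences):
    """Sum the vividness of every experience this interpretation explains."""
    return sum(
        exp["vividness"]
        for exp in experiences
        if interpretation in exp["explained_by"]
    )

def most_real(interpretations, experiences):
    """Pick the interpretation with the widest, richest scope."""
    return max(interpretations, key=lambda i: realness_score(i, experiences))

experiences = [
    {"vividness": 1.0, "explained_by": {"waking-life", "dream"}},
    {"vividness": 0.8, "explained_by": {"waking-life"}},
    {"vividness": 0.3, "explained_by": {"dream"}},
]

print(most_real(["waking-life", "dream"], experiences))  # waking-life (1.8 vs 1.3)
```

On this toy model, an altered state only wins the “realness” contest if it accounts for more (or more vivid) experience than waking life does, which is exactly the flip described above.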
This calls faintly to mind fork-resolution in blockchains.
For the most part, blockchains branch constantly, but by design turn whatever is the longest and most-developed legal branch into the canonical one*. This is not purely continuous, since it is not always the same chain over time; one can overtake another. As long as it’s the longest, it becomes the “valid” one.
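A minimal sketch of that rule (my own toy model; real chains such as Bitcoin actually pick the branch with the most accumulated proof-of-work and verify hashes, signatures, and consensus rules for every block, all of which this skips):

```python
# Minimal sketch of longest-chain fork resolution (toy model; real
# nodes select the branch with the most accumulated proof-of-work
# and fully validate every block).

def is_valid(branch):
    # Placeholder validity rule for the sketch; a real node would
    # verify hashes, signatures, and consensus rules here.
    return all(block.get("legal", True) for block in branch)

def canonical_chain(branches):
    """Of all valid branches, treat the longest as canonical."""
    valid = [b for b in branches if is_valid(b)]
    return max(valid, key=len)

fork_a = [{"height": 0}, {"height": 1}, {"height": 2}]
fork_b = [{"height": 0}, {"height": 1}]  # shorter fork loses

print(canonical_chain([fork_a, fork_b]) is fork_a)  # True

# "One can overtake another": if the losing fork grows longer, it
# becomes canonical instead.
fork_b_extended = fork_b + [{"height": 2}, {"height": 3}]
print(canonical_chain([fork_a, fork_b_extended]) is fork_b_extended)  # True
```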
While this is one of the simplest fork-resolution metrics to explain, it is not the only one.
Other varieties of forking (ex: a git repo for a software package) may use other canonicity-resolution heuristics. Here’s a very common one: in a lot of projects, the most-developed branch is called an “alpha,” while the canonical version numbers are reserved for branches deemed debugged or “sufficiently stable.”
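That “stable beats most-built” heuristic might be sketched like this (the branch names, commit counts, and fields are my own illustration, not any real project’s scheme):

```python
# Toy "stability first" canonicity heuristic (my own illustration):
# the most-developed branch stays "alpha" unless it has been deemed
# sufficiently stable; the canonical release is the most-developed
# branch among the stable ones.

def canonical_release(branches):
    """Pick the most-developed branch that has been marked stable."""
    stable = [b for b in branches if b["stable"]]
    return max(stable, key=lambda b: b["commits"])

branches = [
    {"name": "v2.0-alpha", "commits": 500, "stable": False},  # most-built, not canonical
    {"name": "v1.4", "commits": 350, "stable": True},
    {"name": "v1.3", "commits": 300, "stable": True},
]

print(canonical_release(branches)["name"])  # v1.4
```

Note the contrast with the longest-chain rule: here sheer length (development activity) is trumped by a separate “deemed stable” judgment.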
(It is also sometimes possible to provide an avenue for re-integrating or otherwise feeding an off-branch to a main one (ex: uncles), but this can get complicated rather quickly.)
* With the notable exception of hard-forks: a rare event, where there is a social move to quash the validity of a chain in which a substantial misuse has occurred. Coming up with similar cases in history or social reality is left as an exercise for the reader.
One of the things that impressed me a lot about Vervaeke in this episode was naming my crux and meeting it. Like I talk about in Steelmanning Divination, often I’ve written off something for good reasons, and then come across a statement of the thing that says “yes, it runs afoul of X and Y, but even knowing that I think you should look at Z,” and this is a pretty compelling reason to look at Z!
So Vervaeke is familiar with dreams, and expects his audience to be familiar with dreams. Your sense of how much things cohere can be hacked! I realized this as the result of direct experience many years ago, as presumably have most people, and so any claim of states of consciousness that are more in touch with reality than the default state of consciousness, rather than less in touch with it, has a high bar of evidence to clear. The default presumption should be “how are you sure it isn’t just hacking your sense of how much things cohere?”
Vervaeke is also familiar with the unreliability of the propositional knowledge that comes out of these experiences. Some people see God while high, other people see the absence of God while high. Surely this means it’s not a reliable source of knowledge. Contrast to fictional situations; if the DMT entities could in fact factor large numbers, this would be very compelling evidence about them! Or in the world of Control, people in the Astral Realm see a black pyramid, in a way that makes the propositional knowledge gained there reliable.
So Vervaeke’s story is: these mystical experiences are not about propositional knowledge.
People will make varied metaphysical claims. What’s changing is not the content, not this or that piece of knowledge. What’s changing is your functioning: you’re not gaining knowledge, you’re gaining wisdom. You’re gaining skills and sensibilities and sensitivities of significance landscaping that radically transform your existential mode. That is why, for example, the Buddha famously refused to answer metaphysical questions about Nirvana / about enlightenment: because that’s not the point. That’s not what this is about. This is not about getting supra-scientific knowledge; this is about getting extraordinary wisdom and transformation.
This seems pretty promising to me as an account (though it’s obviously not complete). Dreams might be random soup, but if I realize an error in my thinking because of a dream, and that realization persists when I’m sober and stands up to conversations with friends, then I can be pretty confident that I was in fact making a mistake before and the dream gave me whatever insight I needed to fix it. There might be some very deep mistakes that I’m making, such that I need very vivid dreams to fix them. See Mental Mountains for discussion along these lines.
But this is going a step further than that. Often people who wake up from a dream long to return to the dream once sober. I’m not sure how many would actually prefer a dream world to the real world, but this is a common enough trope that I suspect ‘many’. From Inception:
Elderly Bald Man : [towards Cobb] No. They come to be woken up. The dream has become their reality. Who are you to say otherwise, son?
As well, there’s an old point in AI alignment that, well, things that change your utility function are to be avoided by default. “Significance landscaping” is, essentially, the utility function; if I’m going to change that, I pretty clearly want to not change it randomly. Taking heroin, for example, would change my significance landscaping to make heroin much more significant to me. This seems like a bad move, and so I don’t. So in order to think these mystical experiences are better to have than not, the connection to wisdom needs to be developed.
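The alignment point can be made concrete with a toy model (my own sketch; all the weights and numbers are invented): an agent evaluating a proposed change to its values should score outcomes with its current values, not the values it would have afterwards.

```python
# Toy sketch (my construction) of the alignment point above: evaluate
# a proposed change to your utility function using your CURRENT
# utility function, not the one you'd have after the change.

def value_of_life(weights, life):
    """Score a life as a weighted sum of what it contains."""
    return sum(weights.get(k, 0) * amount for k, amount in life.items())

current = {"health": 1.0, "relationships": 1.0, "heroin": 0.0}
after_heroin = {"health": 0.1, "relationships": 0.1, "heroin": 10.0}

life_on_heroin = {"health": 0.2, "relationships": 0.2, "heroin": 1.0}
life_sober = {"health": 1.0, "relationships": 1.0, "heroin": 0.0}

# Judged by current values, the heroin life is worse...
print(value_of_life(current, life_on_heroin) < value_of_life(current, life_sober))  # True
# ...even though the post-modification values would rate it highly.
print(value_of_life(after_heroin, life_on_heroin) > value_of_life(after_heroin, life_sober))  # True
```

The same logic is why a transformative experience that rewrites one’s significance landscape needs an argument (the wisdom connection) before it looks like a good trade from the inside.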
[And also the line I’ve been bringing up so far—where if wisdom is choosing the ‘spiritual realm’ over the ‘secular realm’, then that’s actually a mistake if there’s just a secular realm—needs to be addressed. This is the ‘collapse of religion’ in miniature—if we used to use religion to get people to get over their irrationalities with the carrot of heaven, but people have now realized that heaven isn’t real and so the carrot is a trick, well, we still need some way to get people to get over their irrationalities, to the extent that’s a thing that’s good to do!]
So Vervaeke is familiar with dreams, and expects his audience to be familiar with dreams. Your sense of how much things cohere can be hacked! I realized this as the result of direct experience many years ago, as presumably have most people, and so any claim of states of consciousness that are more in touch with reality than the default state of consciousness, rather than less in touch with it, has a high bar of evidence to clear. The default presumption should be “how are you sure it isn’t just hacking your sense of how much things cohere?”
Doesn’t detract from your point, but I find it interesting that you interpreted dreams as evidence in this direction rather than the opposite. After all, when we are awake, we know we are awake, and correctly feel that our reality is more coherent and true than dreams are. The opposite isn’t true: if we realize we’re dreaming, we typically also realize that the content isn’t true; we don’t end up thinking that dreams are actually more true than reality is. Rather, finding dreams to be coherent requires us to not realize we’re dreaming.
So it feels like someone could just as easily have generalized this into saying “if there’s an alternate state that on examination feels more true than ordinary wakefulness does, then it’s likely to actually be more true, in the same way that ordinary wakefulness both feels and is more true than dreams are”.
One of the things that impressed me a lot about Vervaeke in this episode was naming my crux and meeting it. Like I talk about in Steelmanning Divination, often I’ve written off something for good reasons, and then come across a statement of the thing that says “yes, it runs afoul of X and Y, but even knowing that I think you should look at Z,” and this is a pretty compelling reason to look at Z!
Yes, I also noticed that with Vervaeke. He would often start talking about something that sounds crackpot-ish or like straight-up bullshit, but then immediately mention my objection and go on to talk sense. Last episode had an example of that with “Quantum Change,” which is something I wouldn’t even bother listening to, but he immediately criticized the name and said that the theory is good in spite of it, so I was open to hearing it out.
Towards the end he’s talking about how these transformative experiences people have, these ‘quantum changes’, don’t give people any new knowledge, they give people more WISDOM. But his examples puzzled me.
He says, one person comes out of the transformative experience and says “I knew that God exists,” and then another person comes out and says “I knew that there was no God.”
So my question is, what kind of valid “wisdom” can produce BOTH of those results? Is it just a type of wisdom that transforms the meaning each of these people assigns to the word God?
Around 53–55 minutes into the podcast, if anyone wants to see what I’m referring to.
So my question is, what kind of valid “wisdom” can produce BOTH of those results? Is it just a type of wisdom that transforms the meaning each of these people assigns to the word God?
I’m not quite sure what you mean by “transforms the meaning”; but I agree with at least one version of that.
The way I’d elaborate on it is that “God exists” is more like an internal label for internal experience instead of a shared label for shared experience. Two people talking about ‘the sun’ can be pretty sure they’re talking about the same thing in the outside world; not so for two people talking about God.
And so in a transformative experience, someone might shift their anchor beliefs, and they might not have better labels for those beliefs than “God exists” or “God doesn’t exist,” even though those labels point to different things when unpacked into more complicated language. (For example, one idea that I might compress into “God exists” is “it is better to face life in an open-hearted and loving way,” and another idea that I might compress down to “God doesn’t exist” is “wishful thinking doesn’t accomplish anything, planning does.” Both of those more complicated beliefs can be simultaneously true!)
Episode 10: Consciousness