Three characteristics: impermanence

This is the sixth post of the “a non-mystical explanation of the three characteristics of existence” series.

Impermanence

Like no-self and unsatisfactoriness, impermanence seems like a label for a broad cluster of related phenomena. A one-sentence description of it, phrased in experiential terms, would be that “All experienced phenomena, whether physical or mental, inner or outer, are impermanent”.

As an intellectual claim, this does not sound too surprising: few people would seriously think that either physical things or mental experiences last forever. However, there are ways in which impermanence does contradict our intuitive assumptions.
A conventional example of this is change blindness. In a typical change blindness experiment, people report having good awareness of the details of a picture shown to them: but when details are changed during an eye saccade, subjects fail to notice any difference. Maybe a person’s hat looks red, and people who have been looking right at the hat fail to notice that it looked green just a second ago: the consciousness of the green-ness has vanished, replaced entirely with red.
People are typically surprised by this, thinking that “if it was red a second ago, surely I would remember that”—a thought that implicitly assumes that sense percepts leave permanent memories behind. But as long as something does not explicitly store a piece of conscious information, it is gone as soon as it has been experienced.
This is a natural consequence of the Global Neuronal Workspace (GNW) model of consciousness from neuroscience. As I have previously discussed, studies suggest that the content of consciousness corresponds to information held in a particular network of neurons called the “global workspace”. This workspace can only hold a single piece of conscious content at a time, and new information is constantly trying to enter it, replacing the old information.
Now if the content of your consciousness happens to be something like this:
0 milliseconds: Seeing a red hat
53 milliseconds: Thinking about cookies
200 milliseconds: Seeing a green hat
Then at the 200 millisecond mark, unless some memory system happened to explicitly store the fact of seeing a red hat before, no trace of it remains in consciousness for the person to compare with. One can train particular subsystems to monitor the contents of consciousness and send occasional summaries of previous contents, which is part of what investigating impermanence involves.
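To make this concrete, here is a minimal toy sketch (in Python, with made-up contents and no claim to neuroscientific accuracy) of a single-slot workspace: new content simply overwrites the old, and unless some separate memory process explicitly copies the old content out first, there is nothing left to compare against.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class GlobalWorkspace:
    """Toy model: the workspace holds exactly one piece of content at a time."""
    content: Optional[str] = None
    episodic_memory: List[str] = field(default_factory=list)

    def broadcast(self, new_content: str, store_old_to_memory: bool = False) -> None:
        # A memory subsystem may explicitly store the *current* content before
        # it is overwritten; if nothing does, the old content is simply gone.
        if store_old_to_memory and self.content is not None:
            self.episodic_memory.append(self.content)
        self.content = new_content  # overwrite, don't archive

ws = GlobalWorkspace()
ws.broadcast("seeing a red hat")        # 0 ms
ws.broadcast("thinking about cookies")  # 53 ms
ws.broadcast("seeing a green hat")      # 200 ms

print(ws.content)          # 'seeing a green hat'
print(ws.episodic_memory)  # [] -- no trace that the hat was ever red
```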
Compare this to meditation teacher Daniel Ingram’s description of impermanence:

Absolute transience is truly the actual nature of experiential reality.
What do I mean by “experiential reality”? I mean the universe of sensations that you directly experience. [...] From the conventional perspective, things are usually believed to exist even when you no longer experience them directly, and are thus inferred to exist with only circumstantial evidence to be relatively stable entities. [...] For our day-to-day lives, this assumption is functional and adequate.
For example, you could close your eyes, put down this book or device, and then pick it up again where you left it without opening your eyes. From a pragmatic point of view, this book was where you left it even when you were not directly experiencing it. However, when doing insight practices, it just happens to be much more useful to assume that things are only there when you experience them and not when you don’t. Thus, the gold standard for reality when doing insight practices is the sensations that make up your reality in that instant. Sensations that are not there at that time are not presumed to exist, and thus only sensations arising in that instant do exist, with “exist” clearly being a problematic term, given how transient sensations are.
In short, most of what you assume as making up your universe doesn’t exist most of the time, from a purely sensate point of view. This is exactly, precisely, and specifically the point. [...] sensations arise out of nothing, do their thing, and vanish utterly. Gone. Entirely gone.
In Ingram’s terms, people subconsciously assume that if a person in a picture has a red hat, then the person in the picture is going to keep having a red hat. Also, if the person in the picture has a green hat, they probably also had a green hat when you last looked at them. This kind of an assumption is often pragmatically useful, and may even be a true claim about the world, for as long as the image you are looking at is not being manipulated by researchers who keep changing subtle details. But it is not an accurate model of how your own mind functions.
Consciousness as an FBI report
In a previous article, I mentioned that according to neuroscientist Stanislas Dehaene, one of the functions of consciousness is for subsystems in the brain to exchange summaries of their conclusions. He offered the analogy of the US president being briefed by the FBI. The FBI is a vast organization, with thousands of employees: they are constantly sifting through enormous amounts of data and forming hypotheses about topics with national security relevance. But it would be useless for the FBI to present to the president every single report collected by every single field agent, as well as every analysis compiled by every single analyst in response. Rather, the FBI needs to internally settle on some overall summary of what they believe is going on, and then present that to the president, who can then act based on the information. Similarly, Dehaene suggests that consciousness is a place where different brain systems can exchange summaries of their models, and integrate conflicting evidence in order to arrive at an overall conclusion.
In a similar way, it’s usually not necessary for the brain to keep conscious track of every little detail in an image. Rather, sensory information comes in, and subsystems responsible for processing it broadcast a summary of what they consider important about it. If you look at a painting, a general summary of its contents will be produced and maintained in consciousness, while minor details like the color of someone’s hat won’t be recorded unless a person has a particularly important reason to look at it. (That would be the equivalent of the FBI including a field agent’s random observations in a report to the president. They’re very unlikely to include those unless they are really important.)
This is particularly noticeable when learning to draw: as Raemon discusses in Drawing Less Wrong: Observing Reality:

When you look at a person, what you perceive is not a series of shapes and colors that correspond to what’s there, but rather a bunch of hastily constructed symbols that convey the information that the brain thinks is important. If you haven’t rewired your brain for drawing, then “important” questions do not include “Is that elbow angled at 90 degrees or 75?” or “Where are the eyes in relation to the top of the head?” Instead, what you usually care about are things like “is this person happy, or angry?” and the information that gets recorded is a little tag that says “Smiling” with a vague curving-upwards-line symbol accompanying it.
A large chunk of the information we usually need has to do with the face. This plays a role in two common biases that are near-universal in inexperienced artists:
- Drawing the head much larger than it actually is, compared to the rest of the body
- Drawing the “face” (i.e. everything between the eyebrows and mouth) as if it took up the entire head rather than the bottom half. Practically everything above the eyebrows conveys no relevant information, so it’s just ignored.
Your brain has a mental model of what a human is “supposed” to look like, and that model is wrong. You can see major gains in drawing capability just by learning the “ideal” proportions of a human being.
The relation of this to impermanence is that observing the contents of your mind lets you notice just how little sense data is actually used, and how quickly it vanishes. Returning to Daniel Ingram:
We are typically quite sloppy about distinguishing between physical and mental sensations (memories, mental images, and mental impressions of other physical or mental sensations). These two kinds of sensations alternate, one arising and passing and then the other arising and passing, in a quick but perceptible fashion. Being clear about exactly when the physical sensations are present will begin to clarify their slippery counterparts—flickering mental impressions—that help co-create the illusion of continuity, stability, or solidity. [...]
Each one of these sensations (the physical sensation and the mental impression) arises and vanishes completely before another begins, so it is possible to sort out which is which with relatively stable attention dedicated to consistent precision and to not being lost in stories. This means that the instant you have experienced something, you can know that it isn’t there anymore, and whatever is there is a new sensation that will be gone in an instant. There are typically many other momentary sensations and impressions interspersed with these, but for the sake of practice, this is close enough to what is happening to be a good working model.
Ingram suggests that between physical sensations, there are mental sensations which “fill in the gaps”, and which prevent people from noticing that the original physical sensations only come in sporadically. As people become more adept at meditation practices such as following the breath, they may come to notice that a large part of their time has been spent on following a thought about the breath, rather than the breath itself: and far less sensory information about the breath actually comes to consciousness than they assumed.
In an earlier article on insight meditation I gave another example about these kinds of mental sensations. I mentioned a time when I was doing concentration meditation, using an app that played the sound of something hitting a woodblock, 50 times per minute. As I was concentrating on listening to the sound, I noticed that what had originally been just one thing in my experience—a discrete sound event—was actually composed of many smaller parts. The beginning and end of the sound were different, so there were actually two sound sensations; and there was a subtle visualization of something hitting something else; and a sense of motion accompanying that visualization. I had not previously even been fully aware that my mind was automatically creating a mental image of what it thought that the sound represented.
Continuing to observe those different components, I became more aware of the fact that my visualization of the sound changed over time and between meditation sessions, in a rather arbitrary way. Sometimes my mind conjured up a vision of a hammer hitting a rock in a dwarven mine; sometimes it was two wooden sticks hitting each other; sometimes it was drops of water falling on the screen of my phone.
Normally, all of this would just be packaged together into a general impression of “I’m hearing some sound”. Our raw sense data is made up of countless small details and sensations, each arising and passing away in rapid succession—but we mostly perceive the high-level summaries, which are much more static. This creates an experience of seeing solid and discrete objects, and a feeling of there being permanent objects.
So how does one actually come to see what is happening in their mind?
In The Mind Illuminated, meditation teacher and former neuroscientist John Yates (Culadasa) suggests that one way this happens is by taking the subsystems responsible for producing such summaries and directing them to produce summaries about the content of consciousness. The brain already has a subsystem that generates overall summaries of what’s going on in your mind; you can train that system to produce more detailed reports. Yates calls such summaries introspective awareness (discussed in more detail in an earlier article).
The impermanence of the self
As I have discussed, consciousness involves a constant competitive process, where different subsystems send content to the global workspace. At any given time, only one of these pieces of content is selected to become the content of consciousness. We might say that there has been a “subsystem switch” or a “subsystem swap” when the content of consciousness changes from that submitted by one subsystem to that submitted by another.
In normal circumstances, the structure of your mind is such that you cannot directly notice the different subsystems getting swapped in and out. Your consciousness can only hold one piece of information at a time. Suppose that at one moment, you are thinking of your friend, and at the next you are thinking of candy. When you think of candy, you are no longer aware of the fact that you were thinking of your friend the previous moment. You can often infer that a subsystem switch has happened, but you can’t actually experience the switch.
However, if you develop more detailed introspective awareness, the stream of your consciousness may include reports such as this:
Subsystem 1: So I was talking with my friend and she said…
Subsystem 2: Ooh, candy.
Awareness subsystem: The train of thought about my friend switched to a train of thought about candy right now.
Subjectively, this feels like becoming aware of the subsystem swapping in real time: a thought comes in, while an “afterimage” of the previous thought lingers for a brief moment, enough to make you realize that one kind of thought has replaced the other. If the trains of thought are different enough, the transitions between them might feel really sharp and distinct.
You may also notice that you have two or more separate thought streams going on in parallel, without having been aware of the fact. At one moment you are thinking about your friend, and at another you are thinking about candy. Despite the fact that these two thought streams have kept alternating, maybe switching once every couple of seconds, they have been entirely unaware of each other. First the candy is everything that is in your mind, then your friend, then the candy again.
This is not to say that it would normally be impossible to be aware of having multiple trains of thought going on. Even without meditative training, your brain is constantly producing summaries of what’s happening, including summaries of what’s happening in your head. But what normally happens is something like having the first train of thought, then having the second train of thought, and then having general introspective awareness of there being two trains of thought. What does not usually happen is that the introspective awareness is sharp enough to register the fact that whenever the train of thought switches, everything else disappears from consciousness for the duration.
Rather than there being a single observer who experiences all of their own thoughts, there are three separate processes, two of them concerned with their own issues and a third meta-process keeping a loose record of what the two others have been up to.
A rough analogy would be to a (single-core) computer that keeps executing multiple different programs in succession, with the contents of the processor being cleaned out for the next program each time the execution switches. As long as everything goes smoothly, things will appear to the user as multiple different programs being executed at the same time, and the programs themselves will be unaware of the other programs. Yet, a sufficiently fine-grained trace of the different processes will reveal that only one has been running at a time. (Though unlike in this analogy, mental subsystems do keep running even when “swapped out”; they just don’t have write access to consciousness during that time.)
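As a toy illustration of that analogy (a deliberately crude sketch, not a model of any actual scheduler or brain process), here two trains of thought alternate on a single workspace, and only a separate trace, standing in for introspective awareness, reveals that just one was ever present at a time:

```python
from itertools import zip_longest

friend_thoughts = ["so I was talking with my friend...", "and she said...", "I should call her back"]
candy_thoughts  = ["ooh, candy", "where did I put that candy bar?", "maybe just one piece"]

workspace = None  # the single "processor": holds one thought at a time
trace = []        # a fine-grained record, playing the role of introspective awareness

# Interleave the two streams; each swap simply overwrites the previous content.
for pair in zip_longest(friend_thoughts, candy_thoughts):
    for thought in pair:
        if thought is None:
            continue
        workspace = thought   # the old thought is not saved anywhere
        trace.append(thought)

print(workspace)  # only the most recent thought is "in consciousness"
for entry in trace:
    print(entry)  # the trace shows strict alternation, one thought at a time
```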
By developing sufficiently detailed introspective awareness, one can also notice that the sense of self is actually only present part of the time. As discussed in previous posts [1, 2], the experience of a self is basically a piece of data—a narrative which is sometimes experienced and sometimes not. That is, it is another high-level summary of what is happening—“I am doing this thing”—constructed from lower-level data. (In a comment, Vanessa Kosoy suggested that the experience of a self is an explanation of why the person is doing things, constructed for social purposes and to be able to justify your behavior afterwards. This sounds plausible to me.)
That means that the content of your consciousness may be something like:
Time 1: The sight of a bird outside the window.
Time 2: The thought “there’s a bird over there”.
Time 3: The experience of typing on a keyboard.
Time 4: The sound of a car outside.
Time 5: A mental image of a car.
Time 6: A sense of being someone who sees the bird and hears the car, while typing on a keyboard.
… that is, normally you may experience there being a constant, permanent self which feels like what you really are. But in fact, during a large part of your conscious experience, that sense of self may simply not be there at all. Normally this might be impossible to detect due to what’s called the refrigerator light illusion: the light in a refrigerator turns on whenever you open the door, so it seems to you to always be on. Likewise, whenever you ask “do I experience a sense of self right now”, that question references and activates a self-schema, meaning that the answer is always “yes”. It is only by developing introspective awareness that records all mental content, without needing to make reference to a self, that you can come to notice the way in which your self constantly appears and disappears.
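A toy way of putting the refrigerator-light point (purely illustrative; the moments and function names are invented):

```python
import random

moments = [
    "the sight of a bird outside the window",
    "the thought 'there's a bird over there'",
    "the experience of typing on a keyboard",
    "the sound of a car outside",
    "a mental image of a car",
    "a sense of being someone who sees, hears and types",
]

def check_for_self():
    # Asking the question activates a self-schema, so the check always succeeds:
    # the refrigerator light is on every time you open the door.
    return True

def passive_log(n=1000):
    # An introspective-awareness-style log records whatever content happens to be
    # present, without posing the question; the self shows up only sometimes.
    samples = [random.choice(moments) for _ in range(n)]
    return sum("sense of being" in m for m in samples) / n

print(check_for_self())  # True, no matter when you ask
print(passive_log())     # roughly 0.17 -- a self-moment only a fraction of the time
```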
It is worth noting that coming to experience this may feel very frightening. Psychologist and meditation teacher Ron Crouch describes one way that it can go:
What is actually happening, down deep, is that as your attention is syncing up with the dissolution of phenomena you are finding that there is nothing in experience that the sense of “me” can hold onto as stable and permanent. It just can’t get any footing. You do not realize it at a cognitive level, but you are getting a deep insight into the impermanence of all phenomena, and along with that, into the impermanence of the self. This is something that is terrifying to one’s very roots. Needless to say, this initial stage can be a great source of distress and people can become stuck here for some time if they do not have good guidance.
We might think the distress follows from the mind’s underlying assumption that the self must be something like a permanent object. Whenever one has checked for the presence of the self, it has been there: thus, it is something that persists uninterrupted over time (except maybe in sleep). Now it—or something that resembles what it used to be—suddenly keeps vanishing and reappearing. Does that mean that you are dying?
Eventually, given enough further practice, the mind readjusts and revises its models. Continuity of consciousness does not mean uninterrupted continuity of self after all; the self is as impermanent as any other sensory experience. Nothing here to see, move along now.
Impermanence and unsatisfactoriness
One aspect of craving is clinging, a kind of repeated craving. The mind notices a pleasant or unpleasant sensation, and then tries to keep the pleasant sensations in consciousness and the unpleasant sensations out of consciousness. This may feel like you are trying to “freeze” the content of consciousness into a particular, pleasurable slice of experience.
In an earlier post, I gave a list of examples about craving; this is also a good list of examples to use for clinging, so I’ll repeat it here:
It is morning and your alarm bell rings. You should get up, but it feels nice to be sleepy and remain in bed. You want to hang onto those pleasant sensations of sleepiness for a little bit more.
You are spending an evening together with a loved one. This is the last occasion that you will see each other in a long time. You feel really good being with them, but a small part of you is unhappy over the fact that this evening will eventually end.
You are at work on a Friday afternoon. Your mind wanders to the thought of no longer being at work, and doing the things that you had planned to do on the weekend. You would prefer to be done with work already, and find it hard to stay focused as you cling to the thoughts of your free time.
You are single and hanging out with an attractive person. You know that they are not into you, but it would be so great if they were. You can’t stop thinking about that possibility, and this keeps distracting you from the actual conversation.
You are in a conversation with several other people. You think of a line that would be a really good response to what someone else just said. Before you can say it, somebody says a thing, and the conversation moves on. You find yourself still thinking of your line, and how nice it would have been to get to say it.
You are playing a game of chess. You see an opportunity to make a series of moves that looks like it would win the game for you. You get so focused on the sequence of moves that would bring you a victory that you don’t notice that your opponent could also respond in a way that would ruin the entire plan.
You had been planning on going to a famous museum while on your vacation, but the museum turns out to be temporarily closed at the time. You keep thinking about how much you had been looking forward to it.
What is essentially going on is the craving trying to fight against impermanence. Taking the example of being sleepy and in bed: there is the sensation of sleepiness and a feeling of pleasure; and that annoying thought which keeps saying that you really need to get up soon… and the craving wants that pleasant sleepiness back and stable, damnit. If only it would focus on the sleepiness enough, maybe that annoying reminder would go away...
This contributes to the loop where the mind sees craving as necessary for well-being: phenomena won’t stabilize in consciousness by themselves, and craving takes actions to make them more stable. Whenever it is unsuccessfully trying to do so, there is discomfort; when it succeeds in getting the pleasant thing to become the object of consciousness (if only for a moment), there is less discomfort (if only for a moment). Now, that discomfort is being generated by the craving itself, so it could also be eliminated by dropping the craving… but the system does not notice that.
Nor does it notice that following the craving does not lead to consistent happiness. Of course, we may intellectually understand that there’s no single thing that would make us permanently and eternally happy. But at the subsystem level, each source of craving is based on a schema that states something like:
If I get the thing I am craving, things will feel satisfying.
When the subsystem related to that goal is active, this is the schema which will be active in the person’s mind. If you are hungry for food, you only think about how food will bring relief to your discomfort. Intellectually, you may know that soon afterwards you will start wanting something else—but the assumption that your mind is operating from, is that getting the food will bring contentment. And that assumption is correct! Recall that unsatisfactoriness is actually caused by craving. So getting the food will make the craving for it go away—until the next craving pops up, which is likely to happen very soon.
This has the consequence that each individual craving may have its prediction confirmed. The craving for food correctly predicts that food will bring satisfaction from that craving. The craving to look at the phone while you eat correctly predicts that looking at the phone will bring satisfaction from that craving. The craving to go watch pictures of attractive naked people after eating correctly predicts that going to watch pictures of attractive naked people will bring satisfaction from that craving… while the overall system remains in a near-constant state of craving that just keeps changing its target.
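A toy simulation of that dynamic (the cravings and numbers are invented; the point is only the bookkeeping): every individual craving’s prediction comes out true, yet the system spends nearly all of its time craving something.

```python
cravings = ["food", "looking at the phone", "pictures of attractive naked people", "coffee"] * 250

confirmed_predictions = 0
moments_craving = 0
moments_satisfied = 0

for craving in cravings:        # one craving after another, all day long
    moments_craving += 9        # time spent wanting the thing (a made-up number)
    moments_satisfied += 1      # getting it really does relieve *that* craving...
    confirmed_predictions += 1  # ...so the craving's own prediction is confirmed,
                                # and the next craving takes over immediately

print(confirmed_predictions / len(cravings))                    # 1.0: every craving "proved right"
print(moments_craving / (moments_craving + moments_satisfied))  # 0.9: yet craving 90% of the time
```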
In the paper Suffering (Metzinger, 2016), Thomas Metzinger reports on an “experience sampling” experiment, where messages were sent to people’s phones at random times, asking them whether they felt that their current experience would feel worth reliving:
For many, the result was surprising: the number of positive conscious moments per week varied between 0 and 36 [out of 70], with an average of 21.8 or almost 31 per cent of the phenomenological samples, while at 69 per cent a little more than two thirds of the moments were spontaneously ranked as not worth reliving.
Metzinger notes that one cannot generalize from these results to the general population: this was a small, unreplicated pilot study done with a highly selected group (philosophy students). But as he also notes, what is remarkable is that nearly all of the participants were surprised by their own results—they had expected many more moments to feel pleasurable. He speculates that human motivation may depend on systematic self-deception: if a person valued positive experiences but noticed that most of their experience was actually unpleasant, they might become paralyzed.
And it does seem that increased awareness of the impermanence of satisfaction helps reduce craving. I like to think of each individual craving as a form of a hypothesis, in the predictive processing sense where hypotheses drive behavior by seeking to prove themselves true. For example (Friston et al. 2012), your visual system may see someone’s nose and form the hypothesis that “the thing that I’m seeing is a nose, and a nose is part of a person’s face, so I’m seeing someone’s face”. That contains the prediction “faces have eyes next to the nose, so if I look slightly up and to the right I will see an eye, and if I look left from there I will see another eye”; it will then seek to confirm its prediction by making you look at those spots and verify that they do indeed contain eyes.

[Figure: Eye movements seeking to confirm the hypotheses of “I am seeing a face”. From Friston, Adams, Perrinet & Breakspear 2012.]
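Here is a minimal sketch of that logic (not Friston and colleagues’ actual model, just the general shape of a hypothesis that confirms itself by directing where data is gathered; the scene layout is invented):

```python
# A toy "scene": what is actually present at each location the eyes could visit.
scene = {"center": "nose", "upper_left": "eye", "upper_right": "eye", "lower": "mouth"}

# The face hypothesis comes packaged with predictions about what further
# fixations should reveal, and it drives the saccades to exactly those spots.
face_hypothesis_predictions = {"upper_left": "eye", "upper_right": "eye", "lower": "mouth"}

hypothesis_confirmed = True
for location, expected in face_hypothesis_predictions.items():
    observed = scene[location]        # make the saccade and sample the data
    if observed != expected:
        hypothesis_confirmed = False  # prediction error: the face hypothesis fails
        break

print(hypothesis_confirmed)  # True: the hypothesis chose where to look, and got confirmed
```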
Normally, each craving is successfully proving true the hypothesis of “pursuing this craving will cause satisfaction”… but the prediction contains more than a claim that satisfying this craving will bring momentary satisfaction. Since the hypothesis does not model anything that happens after the craving is satisfied, it implicitly claims that the satisfaction will be lasting.
If the mind-system develops increased awareness of the way repeated craving seems to just lead to a constant state of discomfort, then under the right conditions it may consider the hypotheses in those cravings falsified and discard them.
Fighting against a sensation as assuming its permanence
Another thing that is happening is the subsystems failing to notice how fighting against a sensation actually helps keep it in consciousness, and how the sensation might actually fall away on its own if it was not being fought against.
Suppose that you are feeling stressed out over something, and a craving is activated to get rid of the feeling of stress. This involves sending into consciousness a plan for getting rid of the sensation of stress, which needs to make reference to the sensation of stress. This tends to redirect more attention towards the sensation of stress, strengthening the signal associated with it… and because sensations are normally impermanent and tend to easily vanish, this may help keep it in consciousness whereas it would otherwise have disappeared on its own.
In general, craving often operates under the assumption that unpleasant sensations are permanent: that is, they will persist in consciousness until actively resisted. And certainly it is true that not all unpleasant sensations will just disappear if you stop feeding them with attention. But even then, redirecting attention into a struggle against them may actively make them stronger.
If one develops sufficient introspective awareness, they may come to experience this directly. They will notice a neutral sensation, a negative sensation, another neutral sensation, then the aversion to the negative sensation… and notice that there are actually quite a few neutral sensations, during which the negative sensation does not bother them at all. This helps one notice that struggling against discomfort is actually not necessary for being free of discomfort; one is free of discomfort a large part of the time already.
Impermanence as vibration
No discussion of impermanence would be complete without touching upon the topic of “vibrations”. Recall that according to the predictive processing model, the brain is composed of layers of prediction machinery. A given layer can receive sensory information from a lower layer and, from the higher layer, predictions of what that information should look like. For purposes of prediction, each layer is trying to form models of what it expects to see.
Rather than sense information primarily “flowing up” from the sense organs, the brain keeps making guesses of what it expects to see. These expectations are sent “down to the senses”, with the brain using the sense data to check its assumptions and correcting for any mismatches. Mismatches that seem small enough may be ignored and explained away as noise.
One possible model is that the sensory information from the lower levels represents stable, permanent objects. As has been noted, this assumption is often a useful and correct one for predicting how the world behaves, so the system begins to assume it… ignoring the fact that sensory data is actually coming in pulses rather than constantly.
When one’s consciousness starts dropping some of the mental impressions that normally “fill in the gaps”, it may lead to an experiential quality of reality “vibrating”. Here is how DavidM describes this:
A meditator practicing in this style will eventually find that their experience is not static, but ‘vibrates’ or fluxes in a peculiar way over extremely short periods of time (fractions of a second). For an explanation by analogy, imagine a set of speakers playing music without dynamic variation; if a person rapidly turns the volume knob in the pattern off-low-high-low-off, the amplitude of the music will flux over time. Similarly, a meditator practicing in this style finds that the components of experience are not static, but fluctuate rapidly from nonexistent to existent and back again. N.B. This has nothing to do with the fact that the contents of experience are constantly changing. Rather, apparently static objects (e.g. an unchanging white visual field) turn out to be in flux.
For the most part, the hypothesis of “sensory data represents permanent objects” has turned out to deliver good results, so normally any gaps in the data will be automatically “filled in” by the model, as they are assumed to be meaningless noise. As a result, a “neural autocompletion feature” can create an impression of closely observing sensory data, even when sensory data is actually sparse and the impression of it is mostly fabricated on the basis of a few data points.
For as long as the sensed data deviates only a little from the expected, the deviation is treated as noise and ignored; but once the deviation crosses some critical threshold, it is picked up and registered as surprising. If one intentionally goes looking for vibrations, then one is trying to pick up finer and finer distinctions in the sense data. This forces the system to pay attention to minor patterns that would otherwise have been treated as meaningless noise. That causes it to notice discrepancies between the higher-level model’s prediction of “solid stream of sense data” and the sensory experiences that are coming in as pulses. This leads to an awareness of vibrations, and more generally insight into how the brain fills in data which is not actually there.
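A toy sketch of that threshold effect (a deliberate caricature of predictive processing, not a real model): the same pulsed input looks perfectly solid under a coarse surprise threshold, and starts “vibrating” once the threshold is tightened.

```python
# Toy input: an apparently static signal (say, a plain white visual field)
# that actually arrives in pulses -- present on some ticks, absent on others.
raw_input = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # 1 = sample arrived, 0 = gap

prediction = 1  # the higher-level model predicts a constant, solid signal

def perceive(samples, surprise_threshold):
    percept = []
    for sample in samples:
        error = abs(sample - prediction)
        if error <= surprise_threshold:
            percept.append(prediction)   # small deviation: explained away, gap filled in
        else:
            percept.append(sample)       # too surprising to ignore: the gap registers
    return percept

print(perceive(raw_input, surprise_threshold=1.0))  # all 1s: experienced as solid
print(perceive(raw_input, surprise_threshold=0.5))  # the pulsed structure shows through
```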
On the other hand, I have also heard reports of people finding vibrations without explicitly even looking at sensory details, in contexts such as doing loving-kindness meditation. I am confused about what is going on there and don’t know how to explain it. This is also an area that I have personally investigated relatively little.