Subagents, introspective awareness, and blending

In this post, I extend the model of mind that I’ve been building up in previous posts to explain some things about change blindness, not knowing whether you are conscious, forgetting most of your thoughts, and mistaking your thoughts and emotions for objective facts, while also connecting it with the theory in the meditation book The Mind Illuminated. (If you didn’t read my previous posts, this article has been written to also work as a stand-alone piece.)

The Mind Illuminated (Amazon, SSC review), or TMI for short, presents what it calls the moments of consciousness model. According to this model, our stream of consciousness consists of a series of discrete moments, each a mental object. Under this model, there are always different “subminds” which are projecting mental objects into consciousness. At different moments, different mental objects get selected as the content of consciousness.

If you’ve read some of the previous posts in this sequence, you may recognize this as sounding familiar. We started by discussing some of the neuroscience research on consciousness. There we covered the GWT/GNW theory of consciousness being a “workspace” in the brain that different brain systems project information into, and which allows them to synchronize their processing around a single piece of information. In the next post, we discussed the psychotherapy model of Internal Family Systems, which also conceives of the mind as being composed of different parts, many of which are trying to accomplish various aims by competing to project various mental objects into consciousness. (TMI talks about subminds, IFS talks about parts, GWT/GNW just talks about different parts of the brain; for consistency’s sake, I will just use “subagent” in the rest of this post.)

At this point, we might want to look at some criticisms of this kind of framework. Susan Blackmore has written an interesting paper called “There is no stream of consciousness”. She gives several examples of why we should reject the notion of any stream of consciousness. For instance, this one:

For many years now I have been getting my students to ask themselves, as many times as possible every day “Am I conscious now?”. Typically they find the task unexpectedly hard to do; and hard to remember to do. But when they do it, it has some very odd effects. First they often report that they always seem to be conscious when they ask the question but become less and less sure about whether they were conscious a moment before. With more practice they say that asking the question itself makes them more conscious, and that they can extend this consciousness from a few seconds to perhaps a minute or two. What does this say about consciousness the rest of the time?
Just this starting exercise (we go on to various elaborations of it as the course progresses) begins to change many students’ assumptions about their own experience. In particular they become less sure that there are always contents in their stream of consciousness. How does it seem to you? It is worth deciding at the outset because this is what I am going to deny. I suggest that there is no stream of consciousness. [...]
I want to replace our familiar idea of a stream of consciousness with that of illusory backwards streams. At any time in the brain a whole lot of different things are going on. None of these is either ‘in’ or ‘out’ of consciousness, so we don’t need to explain the ‘difference’ between conscious and unconscious processing. Every so often something happens to create what seems to have been a stream. For example, we ask “Am I conscious now?”. At this point a retrospective story is concocted about what was in the stream of consciousness a moment before, together with a self who was apparently experiencing it. Of course there was neither a conscious self nor a stream, but it now seems as though there was. This process goes on all the time with new stories being concocted whenever required. At any time that we bother to look, or ask ourselves about it, it seems as though there is a stream of consciousness going on. When we don’t bother to ask, or to look, it doesn’t, but then we don’t notice so it doesn’t matter. This way the grand illusion is concocted.

This is an interesting argument. A similar example: when I first started doing something like track-back meditation on walks, checking what had been in my mind just a second ago, I was surprised at just how many thoughts I would have during a walk and then usually just totally forget about afterwards, coming back home with no recollection of 95% of them. This seems similar to Blackmore’s “was I conscious just now” question, in that when I started to check back on the contents of my mind from just a few seconds ago, I was frequently surprised by what I found out. (And yes, I’ve tried the “was I conscious just now” question as well, with similar results as Blackmore’s students.)

Another example that Blackmore cites is change blindness. When people are shown an image, it’s often possible to introduce major unnoticed changes into the image, as long as people are not looking at the very location of the change when it’s made. Blackmore interprets this as well to mean that there is no stream of consciousness—we aren’t actually building up a detailed visual model of our environment, which we would then experience in our consciousness.

One might summarize this class of objections as something like, “stream-of-consciousness theories assume that there is a conscious stream of mental objects in our minds that we are aware of. However, upon investigation it often becomes apparent that we haven’t been aware of something that was supposedly in our stream of consciousness. In change blindness experiments we weren’t aware of what the changed detail actually was pre-change, and more generally we don’t even have clear awareness of whether we were conscious a moment ago.”

But on the other hand, as we reviewed earlier, there still seem to be objective experiments which establish the existence of something like a “consciousness”, which holds approximately one mental object at a time.

So I would interpret Blackmore’s findings differently. I agree that answers to questions like “am I conscious right now” are constructed somewhat on the spot, in response to the question. But I don’t think that we need to reject having a stream of consciousness because of that. I think that you can be aware of something, without being aware of the fact that you were aware of it.

Robots again

For example, let’s look at a robot that has something like a global consciousness workspace. Here are the contents of its consciousness on five successive timesteps:

  1. It’s raining outside

  2. Battery low

  3. Technological unemployment protestors are outside

  4. Battery low

  5. I’m now recharging my battery

Notice that at the first timestep, the robot was aware of the fact that it was raining outside; this was the fact being broadcast from consciousness to all subsystems. But at no later timestep was it conscious of the fact that at the first timestep, it had been aware of it raining outside. Assuming that no subagent happened to save this specific piece of information, then all knowledge of it was lost as soon as the content of the consciousness workspace changed.
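
To make this concrete, here is a minimal Python sketch of the dynamic described above. Everything in it (the Subagent and GlobalWorkspace names, the selection rule) is my own toy illustration rather than anything taken from TMI or GWT/GNW; the point is simply that the workspace holds one mental object at a time, overwrites it on every timestep, and retains nothing unless some subagent separately stores it.

```python
# Toy sketch only: illustrative names, not an implementation of TMI or GWT/GNW.

class Subagent:
    """A subsystem that may project a mental object into consciousness."""
    def __init__(self, name, proposals):
        self.name = name
        self.proposals = proposals  # timestep -> mental object it wants to project

    def propose(self, t):
        return self.proposals.get(t)

    def receive(self, mental_object):
        """Called when the workspace broadcasts its content; ignored by default."""
        pass


class GlobalWorkspace:
    """Holds exactly one mental object; old content is lost when overwritten."""
    def __init__(self, subagents):
        self.subagents = subagents
        self.content = None

    def step(self, t):
        # Gather candidate mental objects and pick one to occupy consciousness.
        candidates = [p for p in (s.propose(t) for s in self.subagents) if p]
        if candidates:
            self.content = candidates[0]
        # Broadcast the selected object to every subagent.
        for s in self.subagents:
            s.receive(self.content)
        return self.content


weather = Subagent("weather", {1: "It's raining outside"})
battery = Subagent("battery", {2: "Battery low", 4: "Battery low"})
vision = Subagent("vision", {3: "Technological unemployment protestors are outside"})
charger = Subagent("charger", {5: "I'm now recharging my battery"})

workspace = GlobalWorkspace([weather, battery, vision, charger])
for t in range(1, 6):
    print(t, workspace.step(t))
# After t=1, no record remains anywhere that "It's raining outside" was ever conscious.
```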

But suppose that there is some subagent which happens to keep track of what has been happening in consciousness. In that case it may choose to make its memory of previous mind-states consciously available:

  6. At time 1, there was the thought that [It’s raining outside]

Now there is a mental object in the robot’s consciousness, which encodes not only the observation of it raining outside before, but also the fact that the system was thinking of this before. That knowledge may then have further effects on the system—for example, when I became aware of how much time I spent on useless rumination while on walks, I got frustrated. And this seems to have contributed to making me ruminate less: as the system’s actions and their overall effect were metacognitively represented and made available for the system’s decision-making, this had the effect of the system adjusting its behavior to tune down activity that was deemed useless.

The Mind Illuminated calls this introspective awareness. Moments of introspective awareness are summaries of the system’s previous mental activity, with there being a dedicated subagent with the task of preparing and outputting such summaries. Usually it will only focus on tracking the specific kinds of mental states which seem important to track.
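
Continuing the toy sketch from above (and reusing its Subagent and GlobalWorkspace classes and the four subagents defined there), one hypothetical way to picture such a dedicated subagent is as a subsystem that logs what gets broadcast and later projects a summary of that log back into the workspace. Again, the class below and its details are my own illustration, not TMI’s model.

```python
# Toy continuation of the earlier sketch; depends on Subagent, GlobalWorkspace,
# weather, battery, vision and charger defined above.

class IntrospectiveAwareness(Subagent):
    """Logs what gets broadcast into consciousness and later re-presents a summary."""
    def __init__(self, name, report_at):
        super().__init__(name, proposals={})
        self.report_at = report_at   # timestep at which to surface a summary
        self.log = []                # (timestep, mental object) pairs
        self.steps_seen = 0

    def receive(self, mental_object):
        # Count broadcasts to recover the timestep, and record what was conscious.
        self.steps_seen += 1
        if mental_object is not None:
            self.log.append((self.steps_seen, mental_object))

    def propose(self, t):
        if t == self.report_at and self.log:
            past_t, thought = self.log[0]
            # A moment of introspective awareness: a summary of earlier mental activity.
            return f"At time {past_t}, there was the thought that [{thought}]"
        return None


introspection = IntrospectiveAwareness("introspection", report_at=6)
workspace = GlobalWorkspace([weather, battery, vision, charger, introspection])
for t in range(1, 7):
    print(t, workspace.step(t))
# At t=6, consciousness holds a mental object *about* the t=1 thought:
# "At time 1, there was the thought that [It's raining outside]"
```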

So if we ask ourselves “was I conscious just now” for the first time, that might cause this subagent to output some representation of the previous state we were in. But it doesn’t have experience in answering this question, and if it’s anything like most human memory systems, it needs to have some kind of a concrete example of what exactly it is looking for. The first time we ask it, the subagent’s pattern-matcher knows that the system is presumably conscious at this instant, so it should be looking for some feature in our previous experiences which somehow resembles this moment, but it’s not quite sure of which one. And an introspective mind state is likely to be different from the less introspective mind state that we were in a moment ago.

This has the result that on the first few times when it’s asked, the subagent may produce an uncertain answer: it’s basically asking its memory store “do my previous mind states resemble this one as judged by some unclear criteria”, which is obviously hard to answer.

With time, operating from the assumption that the system is currently conscious, the subagent may learn to find more connections between the current moment and past ones that it still happens to have in memory. Then it will report those moments as having been conscious, and likely also focus more attention on aspects of the current experience which it has learned to consider “conscious”. This would match Blackmore’s report of “with more practice [the students] say that asking the question itself makes them more conscious, and that they can extend this consciousness from a few seconds to perhaps a minute or two”.

Similarly, this explains my not being aware of most of my thoughts, as well as change blindness. I had a stream of thoughts, but because I had not been practicing introspective awareness, there were no moments of introspective awareness making me aware of having had those thoughts. Though I was aware of the thoughts at the time, that awareness was never re-presented in a way that would have left a memory trace.

In change blindness experiments, people might look at the same spot in a picture twice. Although they did see the contents of that spot at time 1 and were aware of them, that memory was never stored anywhere. When at time 2 they looked at the same spot and it was different, the lack of an awareness of what they saw previously means that they don’t notice the change.

Introspective awareness will be an important concept in my future posts. (Abram Demski also wrote a previous post on Track-Back Meditation, which is basically an exercise for introspective awareness.) Today, I’m going to talk about its relation to a concept I’ve talked about before: blending / cognitive fusion.

Blending

I’ve previously discussed “cognitive fusion”: what happens when the content of a thought or emotion is experienced as an objective truth rather than a mental construct. For instance, you get angry at someone, and the emotion makes you experience them as a horrible person—and in the moment this seems just true to you, rather than being an interpretation created by your emotional reaction.

You can also fuse with more logical-type beliefs—or for that matter any beliefs—when you just treat them as unquestioned truths, without remembering the possibility that they might be wrong. In my previous post, I suggested that many forms of meditation were training the skill of intentional cognitive defusion, but I didn’t explain how exactly meditation lets you get better at defusion.

In my post about Internal Family Systems, I mentioned that IFS uses the term “blending” for when a subagent is sending emotional content to your consciousness, and suggested that IFS’s unblending techniques worked by associating extra content around those thoughts and emotions, allowing you to recognize them as mental objects. For instance, you might notice sensations in your body that were associated with the emotion, and let your mind generate a mental image of what the physical form of those sensations might look like. Then this set of emotions, thoughts, sensations, and visual images becomes “packaged together” in your mind, unambiguously designating it as a mental object.

My current model is that meditation works similarly, only using moments of introspective awareness as the “extra wrapper”. Suppose again that you are a robot, and the content of your consciousness is:

  1. It’s raining outside.

Note that this mental object is basically being taken as an axiomatic truth: what is in your consciousness is that it is raining outside.

On the other hand, suppose that your consciousness contains this:

  1. Sensor 62 is reporting that [it’s raining outside].

Now the mental object in your consciousness contains the origin of the belief that it’s raining. This information is made available to various subagents which have other beliefs. E.g. a subagent holding knowledge about sensors might, upon seeing this mental object, recognize the reference to “sensor 62” and output its estimate of that sensor’s reliability. The previous two mental objects could then be combined by a third subagent:

  1. (Subagent A:) Sensor 62 is reporting that [it is raining outside]

  2. (Subagent B:) Readings from sensor 62 are reliable 38% of the time.

  3. (Subagent C:) It is raining outside with a 38% probability.

In my discussion of Consciousness and the Brain, I noted that one of the proposed functions of consciousness is to act as a production system, where many different subagents may identify particular mental objects and then apply various rules to transform the contents of consciousness as a result. What I’ve sketched above is exactly a sequence of production rules: at e.g. step 2, something like the rule “if sensor 62 is mentioned as an information source, output the current best estimate of sensor 62’s reliability” is applied by subagent B. Then at the third timestep, another subagent combines the observations from the previous two timesteps, and sends that into consciousness.
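
To show what this production-rule reading might look like mechanically, here is a small self-contained toy sketch. The rules match on literal strings and the reliability table is hard-coded, which is of course my own simplification; the point is only that each subagent is a condition-action pair that fires off whatever currently sits in consciousness and writes its result back.

```python
import re

# Toy production system over the contents of consciousness. Each "subagent" is a
# rule: a pattern that matches the current mental object(s) plus an action that
# produces a new mental object. The string matching is my own simplification.

SENSOR_RELIABILITY = {62: 0.38}   # hypothetical knowledge held by subagent B

def subagent_b(history):
    """If a sensor is cited as an information source, report its reliability."""
    m = re.search(r"Sensor (\d+) is reporting", history[-1])
    if m:
        reliability = SENSOR_RELIABILITY.get(int(m.group(1)))
        if reliability is not None:
            return f"Readings from sensor {m.group(1)} are reliable {reliability:.0%} of the time."

def subagent_c(history):
    """Combine a sourced report with a reliability estimate into a credence."""
    if len(history) >= 2 and "are reliable" in history[-1]:
        claim = re.search(r"reporting that \[(.+)\]", history[-2])
        prob = re.search(r"reliable (\d+)%", history[-1])
        if claim and prob:
            return f"{claim.group(1).capitalize()} with a {prob.group(1)}% probability."

# A full production system would check every rule on every cycle; here the two
# rules simply happen to fire in sequence on the example from the post.
history = ["Sensor 62 is reporting that [it is raining outside]"]
for rule in (subagent_b, subagent_c):
    result = rule(history)
    if result:
        history.append(result)

for step, mental_object in enumerate(history, 1):
    print(step, mental_object)
```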

What was important was that the system was not representing the outside weather as just an axiomatic statement, but rather explicitly representing it as a fallible piece of information with a particular source.

Here’s something similar:

  1. I am a bad person.

  2. At t1, there was the thought that [I am a bad person].

Here, the moment of awareness is highlighting the nature of the previous thought as a thought, thus causing the system to treat it as such. If you used introspective awareness for unblending, it might go something like this:

  1. Blending: you are experiencing everything that a subagent outputs as true. In this situation, there’s no introspective awareness that would highlight those outputs as being just thoughts. “My friend is a horrible person” feels like a fact about the world.

  2. Partial blending: you realize that the thoughts which you have might not be entirely true, but you still feel them emotionally and might end up acting accordingly. In this situation, there are moments of introspective awareness, but enough of the original “unmarked” thoughts still get through to affect your other subagents. You feel hostile towards your friend, and realize that this may not be rationally warranted, but still end up talking in an angry tone and maybe saying things you shouldn’t.

  3. Unblending: all or nearly all of the thoughts coming from a subagent are being filtered through a mechanism that wraps them inside moments of introspective awareness, such as “this subagent is thinking that X”. You know that you have a subagent which has this opinion, but none of the other subagents are treating it as a proven fact.
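
Very loosely, one could picture the difference between these three cases as a filter that sits between a subagent and consciousness and wraps some fraction of that subagent’s outputs in an introspective-awareness marker. The function below and its wrap_probability parameter are purely my own toy framing, not a mechanism claimed by IFS or TMI.

```python
import random

def present_to_consciousness(subagent_name, thought, wrap_probability):
    """Either pass a thought through raw (blended), or wrap it in a moment of
    introspective awareness (unblended). wrap_probability is a toy stand-in for
    how reliably the wrapping mechanism catches this subagent's outputs."""
    if random.random() < wrap_probability:
        return f"The subagent [{subagent_name}] is thinking that [{thought}]"
    return thought  # an "unmarked" thought, experienced as a fact about the world

thought = "My friend is a horrible person"
print(present_to_consciousness("anger", thought, wrap_probability=0.0))  # blending
print(present_to_consciousness("anger", thought, wrap_probability=0.5))  # partial blending
print(present_to_consciousness("anger", thought, wrap_probability=1.0))  # unblending
```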

By training your mind to have more moments of introspective awareness, you will become capable of perceiving more and more mental objects as just that: mental objects. A classic example would be all those mindfulness exercises where you stop identifying with the content of a thought, and see it as something separate from yourself. At more advanced levels, even the mental objects which build up sensations, such as those which make up the experience of a self, may be seen as just constructed mental objects.

In my next two articles, I’m going to discuss two particular things that an increased introspective awareness may be used for: unification of mind and an understanding of the so-called marks of existence.