Book Summary: Consciousness and the Brain

One of the fundamental building blocks of much of consciousness research is Global Workspace Theory (GWT). One elaboration of GWT, which focuses on how it might be implemented in the brain, is the Global Neuronal Workspace (GNW) model in neuroscience. Consciousness and the Brain is a 2014 book that summarizes some of the research and basic ideas behind GNW. It was written by Stanislas Dehaene, a French cognitive neuroscientist with a long background in both consciousness research and other related topics.

The book and its replicability

Given that this is a book on psychology and neuroscience that was written before the replication crisis, an obligatory question before we get to the meat of it is: how reliable are any of the claims in this book? After all, if we think that this is based on research which is probably not going to replicate, then we shouldn’t even bother reading the book.

I think that the book’s conclusions are at least reasonably reliable in their broad strokes, if not necessarily all the particular details. That is, some of the details in the cited experiments may be off, but I expect most of them to at least be pointing in the right direction. Here are my reasons:

First, scientists in a field usually have an informal hunch of how reliable the different results are. Even before the replication crisis hit, I had heard private comments from friends working in social psychology, who were saying that everything in the field was built on shaky foundations and that they didn’t trust even their own findings much. In contrast, when I asked a friend who works with some people doing consciousness research, he reported back that they generally felt that GWT/GNW-style theories have a reasonably firm basis. This isn’t terribly conclusive, but at least it’s a bit of evidence.

Second, for some experiments the book explicitly mentions that they have been replicated. That said, some of the reported experiments seemed to be one-off ones, and I have not yet investigated the details of the claimed replications.

Third, this is a work of cognitive neuroscience. Cognitive neuroscience is generally considered a subfield of cognitive psychology, and cognitive psychology is the part of psychology whose results have so far replicated the best. One recent study tested nine key findings from cognitive psychology, and found that they all replicated. The 2015 “Estimating the reproducibility of psychological science” study managed to replicate 50% of recent results in cognitive psychology, as opposed to 25% of results in social psychology. (If 50% sounds low, remember that we should expect some true results to also fail a single replication, so a 50% replication rate doesn’t imply that 50% of the results would be false. Also, a field with a 90% replication rate would probably be too conservative in choosing which experiments to try.) Cognitive psychology replicating pretty well is probably because it deals with phenomena which are much easier to rigorously define and test than social psychology does, so in that regard it’s closer to physics than it is to social psychology.
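
To make the parenthetical point concrete, here is a back-of-the-envelope sketch of why a 50% replication rate is compatible with most findings being true: if replication attempts have limited statistical power, even true effects fail to replicate some of the time. The fraction of true findings and the power level below are illustrative assumptions, not figures from the study.

```python
def expected_replication_rate(p_true, power, alpha=0.05):
    """Expected fraction of findings that replicate, if a fraction p_true
    of published findings are real effects, replication attempts detect
    real effects with probability `power`, and false findings "replicate"
    only at the false-positive rate `alpha`."""
    return p_true * power + (1 - p_true) * alpha

# Illustrative assumption: 60% of findings are true, replications have 80% power.
rate = expected_replication_rate(p_true=0.6, power=0.8)
print(round(rate, 2))  # 0.5 -- a 50% replication rate, even with most findings true
```

So under these (made-up) numbers, a field where 60% of published effects are real would still show only a 50% replication rate.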

On several occasions, the book reports something like “people did an experiment X, but then someone questioned whether the results of that experiment really supported the hypothesis in question or not, so an experiment X+Y was done that repeated X but also tested Y, to help distinguish between two possible interpretations of X”. The general vibe that I get from the book is that different people have different intuitions about how consciousness works, and when someone reports a result that contradicts the intuitions of other researchers, those other researchers are going to propose an alternative interpretation that saves their original intuition. Then people keep doing more experiments until at least one of the intuitions is conclusively disproven—replicating the original experiments in the process.

The analysis of the general reliability of cognitive psychology is somewhat complicated by the fact that these findings are not pure cognitive psychology, but rather cognitive neuroscience. Neuroscience is somewhat more removed from just reporting objective findings, since the statistical models used for analyzing the findings can be flawed. I’ve seen various claims about the problems with statistical tools in neuroscience, but I haven’t really dug enough into the field to say to what extent those are a genuine problem.

As suggestive evidence, a lecturer who teaches a “How reliable is cognitive neuroscience?” course reports that before taking a recent iteration of the course, most students answered the question “If you read about a finding that has been demonstrated across multiple papers in multiple journals by multiple authors, how likely do you think that finding is to be reliable?” with “Extremely likely”, and some with “Moderately likely”. After taking the course, “Moderately likely” became the most common response with a little under half of the responses, followed by “Slightly likely” with around a quarter of the responses and “Extremely likely” with a little over 10% of the responses. Based on this, we might conclude that cognitive neuroscience is moderately reliable, at least as judged by MSc students who’ve just spent time reading and discussing lots of papers critical of cognitive neuroscience.

One thing that’s worth noting is that many of the experiments, including many of the ones this book is reporting on, include two components: a behavioral component and a neuroimaging component. If the statistical models used for interpreting the brain imaging results were flawed, you might get an incorrect impression of what was happening in the brain, but the behavioral results would still be valid. If you’re maximally skeptical of neuroscience, you could choose to throw all of the “inside the brain” results from the book away, and only look at the behavioral results. That seems too conservative to me, but it’s an option. Several of the experiments in the book also use either EEG or single-unit recordings rather than neuroimaging; these are much older and simpler techniques than brain imaging, so they are easier to analyze reliably.

So overall, I would expect that the broad strokes of what’s claimed in the book are reasonably correct, even if some of the details might be off.

Defining consciousness

Given that consciousness is a term loaded with many different interpretations, Dehaene reasonably starts out by explaining what he means by consciousness. He distinguishes between three different terms:

  • Vigilance: whether we “are conscious” in the sense of being awake vs. asleep.

  • Attention: having focused our mental resources on a specific piece of information.

  • Conscious access: some of the information we were focusing on entering our awareness and becoming reportable to others.

For instance, we might be awake (that is, vigilant) and staring hard at a computer screen, waiting for some image to be displayed. When that image does get displayed, our attention will be on it. But it might flash too quickly for us to report what it looked like, or even for us to realize that it was on the screen in the first place. If so, we don’t have conscious access to the thing that we just saw. Whereas if it had been shown for a longer time, we would have conscious access to it.

Dehaene says that when he’s talking about consciousness, he’s talking about conscious access, and also that he doesn’t particularly care to debate philosophy and whether this is really the consciousness. Rather, since we have a clearly-defined thing which we can investigate using scientific methods, we should just do that, and then think about philosophy once we better understand the empirical side of things.

It seems correct to say that studying conscious access is going to tell us many interesting things, even if it doesn’t solve literally all the philosophical questions about consciousness. In the rest of this article, I’ll just follow his conventions and use “consciousness” as a synonym for “conscious access”.

Unconscious processing of meaning

A key type of experiment in Dehaene’s work is subliminal masking. Test subjects are told to stare at a screen and report what they see. A computer program shows various geometric shapes (masks) on the screen. Then at some point, the masks are replaced for a very brief duration with something more meaningful, such as the word “radio”. If the word “radio” is sandwiched between mask shapes, showing it for a sufficiently brief time makes it invisible. The subjects don’t even register a brief flicker, as they might if the screen had been totally blank before the word appeared.

By varying the duration for which the word is shown, researchers can control whether or not the subjects see it. At around 40 milliseconds, it is invisible to everyone. Once the duration reaches a certain threshold, which varies somewhat by person but is around 50 milliseconds, the word will be seen around half of the time. When people report not seeing a word, they also fail to name it when asked some time after the trial.
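
Across trials, this duration-to-visibility relationship can be modeled as a standard psychometric curve. The sketch below uses a logistic function with its midpoint at the ~50 ms threshold mentioned above; the logistic shape and the slope value are my illustrative assumptions, not parameters from the book.

```python
import math

def p_seen(duration_ms, threshold_ms=50.0, slope=0.5):
    """Probability of consciously seeing a masked word shown for
    duration_ms, modeled as a logistic curve around the threshold.
    The slope value is an assumed, illustrative parameter."""
    return 1.0 / (1.0 + math.exp(-slope * (duration_ms - threshold_ms)))

print(round(p_seen(40), 2))  # ~0.01: at 40 ms the word is invisible
print(round(p_seen(50), 2))  # 0.5: at threshold, seen about half the time
print(round(p_seen(60), 2))  # ~0.99: comfortably visible
```

A steeper assumed slope would make the transition even more abrupt, which is in line with the all-or-nothing threshold effect discussed later in the book.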

However, even when a masked target doesn’t make it into consciousness, some part of the brain still sees it. It seems as if the visual subsystem started processing the visual stimulus and parsing it in terms of its meaning, but the results of those computations then never made it all the way to consciousness.

One line of evidence for this comes from subliminal priming experiments, not to be confused with the controversial “social priming” effects in social psychology; unlike those effects, these kinds of priming experiments are well-defined and have been replicated many times. An example of a subliminal priming experiment involves first flashing a hidden word (a prime) so quickly that the participants don’t see it, then following it with a visible word (the target). For instance, people may be primed using the word “radio”, then shown the target word “house”. They are then asked to classify the target word, by e.g. pressing one button if the target word refers to a living thing and another button if it refers to an object.

Subliminal repetition priming refers to the finding that, if the prime and target are the same word and separated by less than a second, then the person will be quicker to classify the target and less likely to make a mistake.

There are indications that when this happens, the brain has parsed some of the prime’s semantic meaning and matched it against the target’s meaning. For example, priming works even when the prime is in lower case (radio) and the target is in upper case (RADIO). This might not seem surprising, but look at the difference between e.g. “a” and “A”. These are rather distinct shapes, which we’ve only learned to associate with each other due to cultural convention. Furthermore, while the prime “range” speeds up the processing of “RANGE”, using “anger” as a prime for “RANGE” has no effect, despite “range” and “anger” having the same letters in a different order. The priming effect comes from the meaning of the prime, rather than just its visual appearance.

The parsing of meaning is not limited to words. If a chess master is shown a simplified chess position for 20 milliseconds, masked so as to make it invisible, they are faster to classify a visible chess position as a check if the hidden position was also a check, and vice versa.

I have reported the above results as saying that the brain does unconscious processing of the meaning of what it sees, but that interpretation has been controversial. After all, something like word processing, or identifying a position in check when you have extensive chess experience, is extremely overlearned and could represent an isolated special case rather than showing that the brain processes meaning more generally. The book goes into more detail about the history of this debate and the differing interpretations that were proposed; I won’t summarize that history in detail, but will just discuss a selection of experiments which also showed unconscious processing of meaning.

In arithmetic priming experiments, people are first shown a masked single-digit number and then a visible one. They are asked to say whether the target number is larger or smaller than 5. When the number used as a prime is congruent with the target (e.g. smaller than 5 when the target number is also smaller than 5), people respond more quickly than if the two are incongruent. Follow-up work has shown that the effect replicates even if the numbers used as primes are shown in writing (“four”) and the target ones as digits (“4”). The priming even works when the prime is an invisible visual number and the target a conscious spoken number.

Further experiments have shown that the priming effect is strongest when the prime is the same number as the target number (4 preceding 4). The effect then decreases the more distant the prime is from the target number: 3 preceding 4 shows less of a priming effect, but it still has more of a priming effect than 2 preceding 4 does, and so on. Thus, the brain has done something like extracting an abstract representation of the magnitude of the prime, and used that to influence the processing of the target’s magnitude.

Numbers could also be argued to be a special case for which we have specialized processing, but later experiments have also shown congruity effects for words in general. For example, when people are shown the word “piano” and asked to indicate whether it is an object or an animal, priming them with a word from a congruent category (“chair”) facilitates the correct response, while an incongruent prime (“cat”) hinders it.

Some epilepsy patients have had electrodes inserted into their skull for treatment purposes. Some of them have also agreed to have those electrodes used for this kind of research. When they are shown invisible “scary” words such as danger, rape, or poison, electrodes implanted near the amygdala—the part of the brain involved in fear processing—register an increased level of activation, which is absent for neutral words such as fridge or sonata.

In one study, subjects were shown a “signal”, and then had to guess whether or not to press a button. As soon as they responded, they were told whether they had guessed correctly (earning money) or incorrectly (losing money). Unknown to them, each signal was preceded by a masked shape which indicated the correct response: one kind of shape indicated that pressing the button would earn them money, another shape indicated that not pressing the button would earn them money, and a third one meant that either choice had an equal chance of being correct. Even though the subjects were never aware of seeing the shape, once enough trials had passed, they started getting many more responses correct than chance alone would indicate. An unconscious value system had associated the shapes with different actions, and was using the subliminal primes for choosing the right action.
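
A toy model of such an unconscious value system is a simple action-value learner: it keeps a running value estimate for each (shape, action) pair and mostly picks the action with the higher estimate. Everything here (the shapes, payoffs, learning rate, and exploration rate) is an illustrative assumption, not the actual experimental setup.

```python
import random

random.seed(0)

def reward(shape, action):
    """Hypothetical payoff rule: shape A rewards pressing, shape B rewards
    not pressing, shape C rewards either choice half the time."""
    if shape == "A":
        return 1 if action == "press" else -1
    if shape == "B":
        return 1 if action == "skip" else -1
    return random.choice([1, -1])

values = {(s, a): 0.0 for s in "ABC" for a in ("press", "skip")}
lr = 0.2  # learning rate for the running value estimates

for _ in range(500):
    shape = random.choice("ABC")
    if random.random() < 0.1:  # occasional exploration
        action = random.choice(("press", "skip"))
    else:  # otherwise exploit the learned values
        action = max(("press", "skip"), key=lambda a: values[(shape, a)])
    r = reward(shape, action)
    values[(shape, action)] += lr * (r - values[(shape, action)])

# After training, the learner reliably prefers the rewarded action:
assert values[("A", "press")] > values[("A", "skip")]
assert values[("B", "skip")] > values[("B", "press")]
```

Like the subjects in the study, this learner ends up responding better than chance without anything corresponding to a reportable "reason" for its choices.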

Unconscious processing can also weigh the average value of a number of variables. In one type of experiment, subjects choose cards from four different decks. Each deck has cards that cause the subject to either earn or lose reward money, with each deck having a different distribution of cards. Two of the decks are “bad”, causing the subjects to lose money on net, and two of them are “good”, causing them to gain money on net. By the end of the experiment, subjects have consciously figured out which is which, and can easily report this. However, measurements of skin conductance indicate that even before they have consciously figured out the good and bad decks, there comes a point when they’ve pulled enough cards that being about to draw a card from a bad deck causes their hands to sweat. A subconscious process has already started generating a prediction of which decks are bad, and is producing a subliminal gut feeling.

A similar unconscious averaging of several variables can also be shown using the subliminal priming paradigm. Subjects are shown five arrows one at a time, some of which point left and some of which point right. They are then asked for the direction that the majority of the arrows were pointing in. When the arrows are made invisible by subliminal masking, subjects who are forced to guess feel like they are just making random guesses, but are in fact responding much more accurately than chance alone would allow.
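
One way to see how this is possible is a signal-detection sketch: suppose each masked arrow contributes only a weak, noisy trace of evidence (far too weak for confident conscious report), but the traces are summed before the forced guess. The noise level below is an illustrative assumption.

```python
import random

random.seed(1)

def guess_majority(arrows, noise=2.0):
    """Sum weak noisy evidence (+1 = right, -1 = left) and answer with
    the sign of the total. The per-arrow noise level is assumed."""
    evidence = sum(a + random.gauss(0, noise) for a in arrows)
    return 1 if evidence > 0 else -1

trials = 10_000
correct = 0
for _ in range(trials):
    arrows = [random.choice([-1, 1]) for _ in range(5)]
    majority = 1 if sum(arrows) > 0 else -1
    if guess_majority(arrows) == majority:
        correct += 1

accuracy = correct / trials
print(accuracy)  # noticeably above the 0.5 chance level
```

Each individual noisy trace is nearly useless on its own, which matches the subjects' feeling of guessing randomly, yet the summed total is informative enough to beat chance.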

There are more examples in the book, but these should be enough to convey the general idea: many different sensory inputs are automatically registered and processed in the brain, even if they are never shown for long enough to make it all the way to consciousness. Unconsciously processed stimuli can even cause movement commands to be generated in the motor cortex and sent to the muscles, though not necessarily at an intensity which would be sufficient to cause actual action.

What about consciousness, then?

So given everything that our brain does automatically and without conscious awareness, what’s up with consciousness? What is it, and what does it do?

Some clues can be found by investigating the neural difference between conscious and unconscious stimuli. Remember that masking experiments show a threshold effect in whether a stimulus is seen or not: if a stimulus which is preceded by a mask is shown for 40 milliseconds, then it’s invisible, but around 50 milliseconds it starts to become visible. In experiments where the duration of the stimulus is carefully varied, there is an all-or-nothing effect: subjects do not report seeing more and more of the stimulus as the duration is gradually increased. Rather, they either see it in its entirety, or they see nothing at all.

A key finding, replicated across different sensory modalities and different methods for measuring brain activation (fMRI, EEG, and MEG), is that a stimulus becoming conscious involves an effect where, once the strength of a stimulus exceeds a certain threshold, the neural signal associated with that stimulus is massively boosted and spreads to regions in the brain which it wouldn’t have reached otherwise. Exceeding the key threshold causes the neural signal generated by the sensory regions to be amplified, so that it spreads more widely instead of fading away before it ever reaches those regions.

Dehaene writes, when discussing an experiment where this was measured using visually flashed words as the stimulus:

By measuring the amplitude of this activity, we discovered that the amplification factor, which distinguishes conscious from unconscious processing, varies across the successive regions of the visual input pathway. At the first cortical stage, the primary visual cortex, the activation evoked by an unseen flashed word is strong enough to be easily detectable. However, as it progresses forward into the cortex, masking makes it lose strength. Subliminal perception can thus be compared to a surf wave that looms large on the horizon but merely licks your feet when it reaches the shore. By comparison, conscious perception is a tsunami—or perhaps an avalanche is a better metaphor, because conscious activation seems to pick up strength as it progresses, much as a minuscule snowball gathers snow and ultimately triggers a landslide.
To bring this point home, in my experiments I flashed words for only 43 milliseconds, thereby injecting minimal evidence into the retina. Nevertheless, activation progressed forward and, on conscious trials, ceaselessly amplified itself until it caused a major activation in many regions. Distant brain regions also became tightly correlated: the incoming wave peaked and receded simultaneously in all areas, suggesting that they exchanged messages that reinforced one another until they turned into an unstoppable avalanche. Synchrony was much stronger for conscious than for unconscious targets, suggesting that correlated activity is an important factor in conscious perception.
These simple experiments thus yielded a first signature of consciousness: an amplification of sensory brain activity, progressively gathering strength and invading multiple regions of the parietal and prefrontal lobes. This signature pattern has often been replicated, even in modalities outside vision. For instance, imagine that you are sitting in a noisy fMRI machine. From time to time, through earphones, you hear a brief pulse of additional sound. Unknown to you, the sound level of these pulses is carefully set so that you detect only half of them. This is an ideal way to compare conscious and unconscious perception, this time in the auditory modality. And the result is equally clear: unconscious sounds activate only the cortex surrounding the primary auditory area, and again, on conscious trials, an avalanche of brain activity amplifies this early sensory activation and breaks into the inferior parietal and prefrontal areas.

Dehaene goes into a considerable amount of detail about the different neuronal signatures which have been found to correlate with consciousness, and the experimental paradigms which have been used to test whether or not those signatures are mere correlates rather than parts of the causal mechanism. I won’t review all of that discussion here, but will summarize some of his conclusions.

Consciousness involves a neural signal activating self-reinforcing loops of activity, which causes wide brain regions to synchronize to process that signal.

Consider what happens when someone in the audience of a performance starts clapping their hands, soon causing the whole audience to burst into applause. As one person starts clapping, other people hear it and start clapping in turn; this becomes a self-reinforcing effect where your clapping causes other people to clap, and you are more likely to continue clapping if other people are also still clapping. In a similar way, the threshold effect of conscious activation seems to involve some neurons sending a signal, causing other neurons to activate and join in on broadcasting that signal. The activation threshold is a point where enough neurons have sufficient mutual excitation to create a self-sustaining avalanche of excitation, spreading throughout the brain.
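
The threshold can be illustrated as a branching process: if each active neuron excites, on average, fewer than one successor, activity dies out; above one, it snowballs. The parameters below are illustrative assumptions, not a model from the book.

```python
import random

random.seed(2)

def run_cascade(branching, start=10, steps=20, cap=10_000):
    """Each active unit excites up to two successors, with an average of
    `branching` successors per unit. Returns the activity level at the end.
    All parameter values are assumed, illustrative numbers."""
    active = start
    for _ in range(steps):
        active = sum(1 for _ in range(active * 2)
                     if random.random() < branching / 2)
        active = min(active, cap)  # crude stand-in for finite brain resources
        if active == 0:
            break
    return active

faded = run_cascade(branching=0.6)    # subthreshold: the signal dies out
ignited = run_cascade(branching=1.5)  # suprathreshold: a self-sustaining avalanche
print(faded, ignited)
```

The qualitative behavior (extinction below the threshold, explosive growth above it) is the point here; it mirrors the all-or-nothing character of conscious ignition.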

The spread of activation is further facilitated by a “brain web” of long-distance neurons, which link multiple areas of the brain together into a densely interconnected network. Not all areas of the brain are organized in this way: for instance, sensory regions are mostly connected to their immediate neighbors, with visual area V1 being primarily connected only to visual area V2, and V2 mostly to V1 and V3, and so on. But higher areas of the cortex are much more joined together, in a network where area A projecting activity to area B usually means that area B also projects activity back to area A. They also involve triangular connections, where region A might project into regions B and C, which then both also project to each other and back to A. This long-distance network joins not only areas of the cortex, but is also connected to regions such as the thalamus (associated with e.g. attention and vigilance), the basal ganglia (involved in decision-making and action), and the hippocampus (involved in episodic memory).

A stimulus becoming conscious involves the signal associated with it achieving enough strength to activate some of the associative areas that are connected with this network, which Dehaene calls the “global neuronal workspace” (GNW). As those areas start broadcasting the signal associated with the stimulus, other areas in the network receive it and start broadcasting it in turn, creating the self-sustaining loop. As this happens, many different regions will end up processing the signal at the same time, synchronizing their processing around the contents of that signal. Dehaene suggests that the things we are conscious of at any given moment are exactly the things being processed in our GNW at that moment.

Dehaene describes this as “a decentralized organization without a single physical meeting site” where “an elitist board of executives, distributed in distant territories, stays in sync by exchanging a plethora of messages”. While he mostly reviews evidence gathered from investigating sensory inputs, his model holds that besides sensory areas, many other regions—such as the ones associated with memory and attention—also feed into and manipulate the contents of the network. Once a stimulus enters the GNW, networks regulating top-down attention can amplify and “help keep alive” stimuli which seem especially important to focus on, and memory networks can commit the stimulus to memory, insert into the network earlier memories which were triggered by the sight of the stimulus, or both.

In the experiments on subliminal processing, an unconscious prime may affect the processing of a conscious stimulus that comes very soon afterwards, but since its activation soon fades out, it can’t be committed to memory or verbally reported on afterwards. A stimulus becoming conscious and being maintained in the GNW both keeps its signal alive for longer and allows it better access to memory networks, which may store it so that it can be re-broadcast into the GNW later.

The global workspace can only process a single item at a time.

Various experiments show the existence of an “attentional blink”: if your attention is strongly focused on one thing, it takes some time to disengage from it and reorient your attention to something else. For instance, in one experiment people are shown a stream of symbols. Most of the symbols are digits, but some are letters. People are told to remember the letters. While the first letter is easy to remember, if two letters are shown in rapid succession, the subjects might not even realize that two of them were present—and they might be surprised to learn that this was the case. The act of attending to the first letter enough to memorize it creates a “blink of the mind” which prevents the second letter from ever being noticed.

Dehaene’s explanation for this is that the GNW can only process a single item at once. The first letter is seen, processed by the early visual centers, and then reaches sufficient strength to make it into the workspace. This causes the workspace neurons to synchronize their processing around the first letter and try to keep the signal active for long enough for it to be memorized—and while they are still doing so, the second letter shows up. It is also processed by the visual regions and makes it to the associative regions, but the attention networks are still reinforcing the signal associated with the original letter and keeping it active in the workspace. The new letter can’t muster enough activation in time to get its signal broadcast into the workspace, so by the time the activation generated by the first letter starts to fade, the signal from the second letter has also faded out. As a result, the second signal never makes it to the workspace where it could leave a conscious memory trace of having been observed.

When two simultaneous events happen, it doesn’t always mean that awareness of one of them is suppressed. If there isn’t too much distraction—due to “internal noise, distracting thoughts, or other incoming stimuli”—the signal of the second event may survive for long enough in an unconscious buffer, making it to the GNW after the first event has been processed. The use of a post-stimulus masking shape in the subliminal masking experiments helps erase the contents of this buffer, by providing a new stimulus that overwrites the old one. In these cases, people’s judgment of the timing of the events is systematically wrong: rather than experiencing the events as having happened simultaneously, they believe the second event to have happened at the time when it entered their consciousness.
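
The single-item workspace plus the short-lived buffer can be sketched as a tiny scheduling model: a stimulus occupies the workspace for a dwell time, and a second stimulus arriving while it is busy either survives briefly in a buffer or fades unseen. All of the timing constants here are illustrative assumptions (only the roughly-300-ms order of magnitude for workspace access comes from the book).

```python
def reported(stimulus_times_ms, dwell=300, buffer_life=100):
    """Return the onset times (ms) of stimuli that reach the workspace.

    A stimulus entering the workspace occupies it for `dwell` ms. A
    stimulus arriving while the workspace is busy waits in an unconscious
    buffer, but its trace fades after `buffer_life` ms. Both constants
    are assumed, illustrative values."""
    seen = []
    free_at = 0
    for t in sorted(stimulus_times_ms):
        if t >= free_at:
            seen.append(t)           # workspace free: enters immediately
            free_at = t + dwell
        elif free_at - t <= buffer_life:
            seen.append(t)           # trace survives until the workspace frees
            free_at += dwell
        # otherwise the trace fades before the workspace is free: never seen
    return seen

print(reported([0, 150]))  # [0] -- the second stimulus is "blinked" away
print(reported([0, 250]))  # [0, 250] -- close enough to survive in the buffer
print(reported([0, 400]))  # [0, 400] -- workspace already free again
```

The middle case also illustrates the timing distortion: the second stimulus only becomes conscious once the workspace frees up, later than its actual onset.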

As an interesting aside, as a result of these effects, the content of our consciousness is always slightly delayed relative to when an event actually happened—a stimulus getting into the GNW takes at least one-third of a second, and may take substantially longer if we are distracted. The brain contains a number of mechanisms for compensating for the delay in GNW access, such as prediction mechanisms which anticipate how familiar events should unfold before they’ve actually happened.

Disrupting or stimulating the GNW has the effects that this theory would predict.

One of the lines of argument by which Dehaene defends the claim that GNW activity is genuinely the same thing as conscious activity, and not a mere correlate, is that artificially interfering with GNW activity has the kinds of effects that we might expect.

To do this, we can use Transcranial Magnetic Stimulation (TMS) to create magnetic fields which stimulate electric activity in the brain; or, if electrodes have been placed in a person’s brain, those can be used to stimulate the brain directly.

In one experiment, TMS was used to stimulate the visual cortex of test subjects, in a way that created a hallucination of light. By varying the intensity of the stimulation, the researchers could control whether or not the subjects noticed anything. On trials when the subjects reported becoming conscious of a hallucination, an avalanche wave associated with consciousness popped up, reaching consciousness faster than normal. In Dehaene’s interpretation, the magnetic pulse bypassed the normal initial processing stages for vision and instead created a neuronal activation directly at a higher cortical area, speeding up conscious access by about 0.1 seconds.

Experiments have also used TMS to successfully erase awareness of a stimulus. One experiment described in the book uses a dual TMS setup. First, a subject is zapped with a magnetic pulse that causes them to see a bit of (non-existent) movement. After it has been confirmed that subjects report becoming conscious of movement when they are zapped with the first pulse, they are then subjected to a trial where they are first zapped with the same pulse, then immediately thereafter with another pulse that’s aimed to disrupt the signal from getting access to the GNW. When this is done, subjects report no longer being aware of having seen any movement.

The func­tions of consciousness

So what ex­actly is the func­tion of con­scious­ness? De­haene offers four differ­ent func­tions.

Con­scious sam­pling of un­con­scious statis­tics and in­te­gra­tion of com­pli­cated information

Suppose that you are a Bayesian decision theorist trying to choose between two options, A and B. For each of the two options, you've computed a probability distribution over the possible outcomes that may result if you choose it. In order to actually make your choice, you need to collapse these probability distributions into point estimates of the expected value of choosing A versus B, to know which one is actually better.

In Dehaene's account, consciousness does something like this. We have a number of unconscious systems which are constantly doing Bayesian statistics and constructing probability distributions about how to e.g. interpret visually ambiguous stimuli, weighing multiple hypotheses at the same time. In order for decision-making to actually be carried out, the system has to choose one of the interpretations and act on the assumption that that interpretation is correct. The hypothesis that the unconscious process selects as correct is then what gets fed into consciousness. For example, when I look at the cup of tea in front of me, I don't see a vast jumble of shifting hypotheses of what this visual information might represent: rather, I just see what I think is a cup of tea, which is what a subconscious process has chosen as the most likely interpretation.
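To make the “collapse to a point estimate” step concrete, here's a minimal sketch in Python. The two outcome distributions and their payoffs are invented numbers purely for illustration; nothing here comes from the book:

```python
# Sketch: collapsing the outcome distributions for options A and B into
# expected values, then choosing the option with the higher one.
# All probabilities and payoffs below are made up for illustration.

def expected_value(dist):
    """dist maps an outcome's payoff to its probability."""
    return sum(payoff * prob for payoff, prob in dist.items())

outcomes_a = {10: 0.2, 5: 0.5, -3: 0.3}  # hypothetical distribution for A
outcomes_b = {20: 0.1, 2: 0.6, -1: 0.3}  # hypothetical distribution for B

# E[A] = 3.6 and E[B] = 2.9, so the point estimates favor A.
choice = "A" if expected_value(outcomes_a) >= expected_value(outcomes_b) else "B"
print(choice)  # -> A
```

The whole distributions are discarded at the moment of choice: only the two point estimates, and the single winner, survive the collapse.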

Dehaene offers the analogy of the US President being briefed by the FBI. The FBI is a vast organization, with thousands of employees: they are constantly sifting through enormous amounts of data, and forming hypotheses about topics which have national security relevance. But it would be useless for the FBI to present to the President every single report collected by every single field agent, as well as every analysis compiled by every single analyst in response. Rather, the FBI needs to internally settle on some overall summary of what they believe is going on, and then present that to the President, who can then act based on the information. Similarly, Dehaene suggests that consciousness is a place where different brain systems can exchange summaries of their models, and integrate conflicting evidence in order to arrive at an overall conclusion.

Dehaene discusses a few experiments which lend support to this interpretation, though here the discussion seems somewhat more speculative than in other parts of the book. One of his pieces of evidence comes from recordings of neuronal circuits which integrate many parts of a visual scene into an overall image, resolving local ambiguities by using information from other parts of the image. Under anesthesia, neuronal recordings show that this integration process is disrupted; consciousness “is needed for neurons to exchange signals in both bottom-up and top-down directions until they agree with each other”. Another experiment shows that if people are shown an artificial stimulus which has been deliberately crafted to be ambiguous, people's conscious impression of the correct interpretation keeps shifting: first it's one interpretation, then the other. By varying the parameters of the stimulus, researchers can control roughly how often people see each interpretation. If Bayesian statistics suggest that interpretation A is 30% likely and interpretation B 70% likely, say, then people's impression of the image will keep shifting so that they see interpretation A roughly 30% of the time and interpretation B roughly 70% of the time.

What we see, at any time, tends to be the most likely in­ter­pre­ta­tion, but other pos­si­bil­ities oc­ca­sion­ally pop up and stay in our con­scious vi­sion for a time du­ra­tion that is pro­por­tional to their statis­ti­cal like­li­hood. Our un­con­scious per­cep­tion works out the prob­a­bil­ities—and then our con­scious­ness sam­ples from them at ran­dom.
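The probability-matching behavior described in the quote can be sketched as a toy simulation. The 30%/70% split follows the example above; everything else, including the labels and the fixed seed, is an invented detail:

```python
import random

random.seed(0)  # fixed seed so the simulation is reproducible

# Hypothetical posterior that an unconscious process might compute.
probabilities = {"interpretation A": 0.3, "interpretation B": 0.7}

def conscious_percept():
    """Sample a single interpretation, proportionally to its probability."""
    labels, weights = zip(*probabilities.items())
    return random.choices(labels, weights=weights)[0]

samples = [conscious_percept() for _ in range(10_000)]
share_a = samples.count("interpretation A") / len(samples)
print(round(share_a, 2))  # close to 0.3, matching the posterior
```

Each individual percept is a single unambiguous interpretation, but the long-run frequencies mirror the underlying probabilities, just as in the bistable-stimulus experiments.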

In De­haene’s ac­count, con­scious­ness is in­volved in higher-level in­te­gra­tion of the mean­ing of con­cepts. For in­stance, our un­der­stand­ing of a paint­ing such as the Mona Lisa is com­posed of many differ­ent things. Per­son­ally, if I think about the Mona Lisa, I see a men­tal image of the paint­ing it­self, I get an as­so­ci­a­tion with the coun­try of Italy, I re­mem­ber hav­ing first learned about the paint­ing in a Don­ald Duck story, and I also re­mem­ber my friend tel­ling me about the time she saw the origi­nal paint­ing it­self. Th­ese are differ­ent pieces of in­for­ma­tion, stored in differ­ent for­mats in differ­ent re­gions of the brain, and the kind of global neu­ronal in­te­gra­tion car­ried out by the GNW al­lows all of these differ­ent in­ter­pre­ta­tions to come to­gether, with ev­ery sys­tem par­ti­ci­pat­ing in con­struct­ing an over­all co­her­ent, syn­chronous in­ter­pre­ta­tion.

All of this sounds sensible enough. At the same time, after all the previous discussion about unconscious decision-making and unconscious integration of information, this leaves me feeling somewhat unsatisfied. If it has been shown that e.g. unconsciously processed cues are enough to guide our decision-making, then how do we square that with the claim that consciousness is necessary for settling on a single interpretation that would allow us to take actions?

My in­ter­pre­ta­tion is that even though un­con­scious pro­cess­ing and de­ci­sion-mak­ing hap­pens, its effect is rel­a­tively weak. If you prime peo­ple with a masked stim­u­lus, then that in­fluences their de­ci­sion-mak­ing so as to give them bet­ter perfor­mance—but it doesn’t give them perfect perfor­mance. In the ex­per­i­ment where masked cues pre­dicted the right ac­tion and un­con­scious learn­ing as­so­ci­ated each cue with the rele­vant ac­tion, the sub­jects only ended up with an av­er­age of 63% cor­rect ac­tions.

Look­ing at the cited pa­per it­self, the au­thors them­selves note that if the cues had been visi­ble, it would only have taken a cou­ple of tri­als for the sub­jects to learn the op­ti­mal be­hav­iors. In the ac­tual ex­per­i­ment, their perfor­mance slowly im­proved un­til it reached a plateau around 60 tri­als. Thus, even though un­con­scious learn­ing and de­ci­sion-mak­ing hap­pens, con­scious learn­ing and de­ci­sion-mak­ing can be sig­nifi­cantly more effec­tive.

Se­cond, while I don’t see De­haene men­tion­ing it, I’ve always liked the PRISM the­ory of con­scious­ness, which sug­gests that one of the func­tions of con­scious­ness is to be a place for re­solv­ing con­flict­ing plans for con­trol­ling the skele­tal mus­cles. In the un­con­scious de­ci­sion-mak­ing ex­per­i­ments, the tasks have mostly been pretty sim­ple, and only in­volved the kinds of goals that could all be en­cap­su­lated within a sin­gle mo­ti­va­tional sys­tem. In real life how­ever, we of­ten run into situ­a­tions where differ­ent brain sys­tems out­put con­flict­ing in­struc­tions. For in­stance, if we are car­ry­ing a hot cup of tea, our de­sire to drop the cup may be com­pet­ing against our de­sire to carry it to the table, and these may have their ori­gin in very differ­ent sorts of mo­ti­va­tions. In­for­ma­tion from both sys­tems would need to be taken into ac­count and in­te­grated in or­der to make an over­all de­ci­sion.

To stretch De­haene’s FBI metaphor: as long as the FBI is do­ing things that fall within their ju­ris­dic­tion and they are equipped to han­dle, then they can just do that with­out get­ting in con­tact with the Pres­i­dent. But if the head of the FBI and the head of the CIA have con­flict­ing ideas about what should be done, on a topic on which the two agen­cies have over­lap­ping ju­ris­dic­tion, then it might be nec­es­sary to bring the dis­agree­ment out in the open so that a higher-up can make the call. Of course, there isn’t any sin­gle “Pres­i­dent” in the brain who would make the fi­nal de­ci­sion: rather, it’s more like the chiefs of all the other alpha­bet soup bu­reaus were also called in, and they then hashed out the de­tails of their un­der­stand­ing un­til they came to a shared agree­ment about what to do.

Last­ing thoughts and work­ing memory

As already touched upon, consciousness is associated with memory. Unconsciously registered information tends to fade very quickly and then disappear. In all the masking experiments, the duration between the prime and the target is very brief; if the duration were any longer, there would be no learning or effect on decision-making. For e.g. associating cues and outcomes with each other over an extended period of time, the cue has to be consciously perceived.

De­haene de­scribes an ex­per­i­ment which demon­strates ex­actly this:

The cog­ni­tive sci­en­tists Robert Clark and Larry Squire con­ducted a won­der­fully sim­ple test of tem­po­ral syn­the­sis: time-lapse con­di­tion­ing of the eye­lid re­flex. At a pre­cisely timed mo­ment, a pneu­matic ma­chine puffs air to­ward the eye. The re­ac­tion is in­stan­ta­neous: in rab­bits and hu­mans al­ike, the pro­tec­tive mem­brane of the eye­lid im­me­di­ately closes. Now pre­cede the de­liv­ery of air with a brief warn­ing tone. The out­come is called Pavlo­vian con­di­tion­ing (in mem­ory of the Rus­sian phys­iol­o­gist Ivan Petro­vich Pavlov, who first con­di­tioned dogs to sal­i­vate at the sound of a bell, in an­ti­ci­pa­tion of food). After a short train­ing, the eye blinks to the sound it­self, in an­ti­ci­pa­tion of the air puff. After a while, an oc­ca­sional pre­sen­ta­tion of the iso­lated tone suffices to in­duce the “eyes wide shut” re­sponse.
The eye-clo­sure re­flex is fast, but is it con­scious or un­con­scious? The an­swer, sur­pris­ingly, de­pends on the pres­ence of a tem­po­ral gap. In one ver­sion of the test, usu­ally termed “de­layed con­di­tion­ing,” the tone lasts un­til the puff ar­rives. Thus the two stim­uli briefly co­in­cide in the an­i­mal’s brain, mak­ing the learn­ing a sim­ple mat­ter of co­in­ci­dence de­tec­tion. In the other, called “trace con­di­tion­ing,” the tone is brief, sep­a­rated from the sub­se­quent air puff by an empty gap. This ver­sion, al­though min­i­mally differ­ent, is clearly more challeng­ing. The or­ganism must keep an ac­tive mem­ory trace of the past tone in or­der to dis­cover its sys­tem­atic re­la­tion to the sub­se­quent air puff. To avoid any con­fu­sion, I will call the first ver­sion “co­in­ci­dence-based con­di­tion­ing” (the first stim­u­lus lasts long enough to co­in­cide with the sec­ond, thus re­mov­ing any need for mem­ory) and the sec­ond “mem­ory-trace con­di­tion­ing” (the sub­ject must keep in mind a mem­ory trace of the sound in or­der to bridge the tem­po­ral gap be­tween it and the ob­nox­ious air puff).
The ex­per­i­men­tal re­sults are clear: co­in­ci­dence-based con­di­tion­ing oc­curs un­con­sciously, while for mem­ory-trace con­di­tion­ing, a con­scious mind is re­quired. In fact, co­in­ci­dence-based con­di­tion­ing does not re­quire any cor­tex at all. A de­cere­brate rab­bit, with­out any cere­bral cor­tex, basal gan­glia, lim­bic sys­tem, tha­la­mus, and hy­potha­la­mus, still shows eye­lid con­di­tion­ing when the sound and the puff over­lap in time. In mem­ory-trace con­di­tion­ing, how­ever, no learn­ing oc­curs un­less the hip­pocam­pus and its con­nected struc­tures (which in­clude the pre­frontal cor­tex) are in­tact. In hu­man sub­jects, mem­ory-trace learn­ing seems to oc­cur if and only if the per­son re­ports be­ing aware of the sys­tem­atic pre­dic­tive link be­tween the tone and the air puff. Elderly peo­ple, am­ne­si­acs, and peo­ple who were sim­ply too dis­tracted to no­tice the tem­po­ral re­la­tion­ship show no con­di­tion­ing at all (whereas these ma­nipu­la­tions have no effect what­so­ever on co­in­ci­dence-based con­di­tion­ing). Brain imag­ing shows that the sub­jects who gain aware­ness are pre­cisely those who ac­ti­vate their pre­frontal cor­tex and hip­pocam­pus dur­ing the learn­ing.

Car­ry­ing out ar­tifi­cial se­rial operations

Con­sider what hap­pens when you calcu­late 12 * 13 in your head.

When you do so, you have some con­scious aware­ness of the steps in­volved: maybe you first re­mem­ber that 12 * 12 = 144 and then add 144 + 12, or maybe you first mul­ti­ply 12 * 10 = 120 and then keep that re­sult in mem­ory as you mul­ti­ply 12 * 3 = 36 and then add 120 + 36. Re­gard­less of the strat­egy, the calcu­la­tion hap­pens con­sciously.

Dehaene holds that this kind of multi-step arithmetic can't happen unconsciously. We can do single-step arithmetic unconsciously: for example, people can be shown a single masked digit n, and then be asked to carry out one of three operations. People might be asked to name the digit (the “n” task), to add 2 to n and report the resulting number (the “n + 2” task), or to report whether or not n is smaller than 5 (the “n < 5” task). On all of these tasks, even if people haven't consciously seen the digit, when they are forced to guess they typically get the right answer half of the time.

However, unconscious two-step arithmetic fails. If people are flashed an invisible digit and told to first add 2 to it, and then report whether the result is more or less than 5 (the “(n + 2) > 5” task), their performance is at chance level. The unconscious mind can carry out a single arithmetic operation, but it can't then store the result of that operation and use it as the input of a second operation, even though it could carry out either of the two operations alone.

Dehaene notes that this might seem to contradict a previous finding, which is that the unconscious brain can accumulate multiple pieces of information over time. For instance, in the arrow experiment, people were shown several masked arrows one at a time; at the end, they could tell whether most of them had been pointing to the left or to the right. Dehaene says that the difference is that opening a neural circuit which accumulates multiple observations is a single operation for the brain: while the accumulator stores information about how many arrows have been observed so far, that information can't be taken out of it and used as an input for a second calculation.

The accumulator also can't reach a decision by itself: for instance, if people saw the arrows consciously, they could reach a decision after having seen three arrows that pointed one way, knowing that the remaining arrows couldn't change the overall decision anymore. In unconscious trials, they can't use this kind of strategic reasoning: the unconscious circuit can only keep adding up the arrows, rather than adding up the arrows and also checking whether a rule of the form “if seen_arrows > 3” has been satisfied yet.
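The contrast can be sketched as two toy accumulators. The five-arrow trial is invented, and the early-stopping check generalizes the “if seen_arrows > 3” example above; this is only an illustration of the distinction, not a model from the book:

```python
# Sketch: an accumulator that can only keep adding up evidence, versus one
# whose running total is also readable by a decision rule (early stopping).
# The arrow sequence below is invented for illustration.

arrows = ["left", "left", "right", "left", "right"]  # hypothetical trial

def unconscious_tally(arrows):
    """Can only keep adding; the total is read out once, at the very end."""
    count = 0
    for a in arrows:
        count += 1 if a == "left" else -1
    return "left" if count > 0 else "right"

def conscious_tally(arrows):
    """The running total is globally available, so a rule can stop early."""
    left = right = 0
    for seen, a in enumerate(arrows, start=1):
        left += a == "left"
        right += a == "right"
        remaining = len(arrows) - seen
        # Stop as soon as the remaining arrows can't change the outcome.
        if left > right + remaining:
            return "left", seen
        if right > left + remaining:
            return "right", seen
    return ("left" if left > right else "right"), len(arrows)

print(unconscious_tally(arrows))  # -> left
print(conscious_tally(arrows))    # -> ('left', 4): decided before the end
```

Both circuits reach the same verdict; the difference is that only the second one can combine the accumulation with a further check on its own intermediate state.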

According to Dehaene, implementing such rules is one of the functions of consciousness. In fact, he explicitly compares consciousness to a production system: an AI design which holds a number of objects in a working memory, and also contains a number of IF-THEN rules, such as “if there is an A in working memory, change it to the sequence BC”. If multiple rules match, one of them is chosen for execution according to some criterion. After one of the rules has fired, the contents of the working memory get updated, and the cycle repeats. The conscious mind, Dehaene says, works using a similar principle—creating a biological Turing machine that can combine operations from a number of neuronal modules, flexibly chaining them together for serial execution.

A so­cial shar­ing device

If a thought is con­scious, we can de­scribe it and re­port it to other peo­ple us­ing lan­guage. I won’t elab­o­rate on this, given that the ad­van­tages of be­ing able to use lan­guage to com­mu­ni­cate with oth­ers are pre­sum­ably ob­vi­ous. I’ll just note that De­haene high­lights one in­ter­est­ing per­spec­tive: one where other peo­ple are viewed as ad­di­tional mod­ules that can carry out trans­for­ma­tions on the ob­jects in the workspace.

Whether it’s a sub­sys­tem in the brain that’s ap­ply­ing pro­duc­tion rules to the workspace con­tents, or whether you are com­mu­ni­cat­ing the con­tents to an­other per­son who then com­ments on it (as guided by some sub­sys­tem in their brain), the same prin­ci­ple of “pro­duc­tion rules trans­form­ing the workspace con­tents” still ap­plies. Only in one of the cases, the rules and trans­for­ma­tions come from sub­sys­tems that are lo­cated within a sin­gle brain, and in the other case sub­sys­tems from mul­ti­ple brains are en­gaged in joint ma­nipu­la­tion of the con­tents—though of course the lin­guis­tic trans­mis­sion is lossy, since sub­sys­tems in mul­ti­ple brains can’t com­mu­ni­cate with the same band­width as sub­sys­tems in a sin­gle brain. (Yet.)

Other stuff

De­haene also dis­cusses a bunch of other things in his book: for in­stance, he talks about co­matose pa­tients and how his re­search has been ap­plied to study their brains, in or­der to pre­dict which pa­tients will even­tu­ally re­cover and which ones will re­main per­ma­nently un­re­spon­sive. This is pretty cool, and feels like a con­fir­ma­tion of the the­o­ries be­ing on the right track, but since it’s no longer elab­o­rat­ing on the mechanisms and func­tions of con­scious­ness, I won’t cover that here.

Take­aways for the rest of the sequence

This has been a pretty long post. Now that we’re at the end, I’m just go­ing to high­light a few of the points which will be most im­por­tant when we go for­ward in the mul­ti­a­gent minds se­quence:

  • Con­scious­ness can only con­tain a sin­gle men­tal ob­ject at a time.

  • The brain has mul­ti­ple differ­ent sys­tems do­ing differ­ent things; many of the sys­tems do un­con­scious pro­cess­ing of in­for­ma­tion. When a men­tal ob­ject be­comes con­scious, many sys­tems will syn­chro­nize their pro­cess­ing around an­a­lyz­ing and ma­nipu­lat­ing that men­tal ob­ject.

  • The brain can be compared to a production system, with a large number of specialized rules which fire in response to specific kinds of mental objects. E.g. when doing mental arithmetic, the rules apply the right sequence of arithmetic operations for achieving the main goal.

If we view the various neural systems as literally being subagents in a technical sense, then we can reframe the above points as follows:

  • The brain has mul­ti­ple sub­agents do­ing differ­ent things; many of the sub­agents do un­con­scious pro­cess­ing of in­for­ma­tion. When a men­tal ob­ject be­comes con­scious, many sub­agents will syn­chro­nize their pro­cess­ing around an­a­lyz­ing and ma­nipu­lat­ing that men­tal ob­ject.

  • The col­lec­tive of sub­agents can only have their joint at­ten­tion fo­cused on one men­tal ob­ject at a time.

  • The brain can be compared to a production system, with a large number of subagents carrying out various tasks when they see the kinds of mental objects that they care about. E.g. when doing mental arithmetic, the subagents apply the right sequence of mental operations for achieving the main goal.

Next up: con­struct­ing a mechanis­tic sketch of how a mind might work, com­bin­ing the above points as well as the kinds of mechanisms that have already been demon­strated in con­tem­po­rary ma­chine learn­ing, to fi­nally end up with a model that pretty closely re­sem­bles the In­ter­nal Fam­ily Sys­tems one.