Subagents, neural Turing machines, thought selection, and blindspots

In my summary of Consciousness and the Brain (Dehaene, 2014), I briefly mentioned that one of the functions of consciousness is to carry out artificial serial operations; in other words, to implement a production system (equivalent to a Turing machine) in the brain.

While I did not go into very much detail about this model in the post, I've used it in later articles. For instance, in Building up to an Internal Family Systems model, I used a toy model where different subagents cast votes to modify the contents of consciousness. One may conceptualize this as equivalent to the production system model, with different subagents implementing different production rules which compete to modify the contents of consciousness.

In this post, I will flesh out the model a bit more, as well as apply it to a few other examples, such as emotion suppression, internal conflict, and blind spots.

Evidence accumulation

Dehaene has outlined his model in a pair of papers (Zylberberg, Dehaene, Roelfsema, & Sigman, 2011; Dehaene & Sigman, 2012), though he is not the first to propose this kind of model. Daniel Dennett's Consciousness Explained (1991) also discusses consciousness as implementing a virtual Turing machine; both cite as examples earlier computational models of the mind, such as Soar and ACT, which work on the same principles.

An important building block in Dehaene's model is based on what we know about evidence accumulation and decision-making in the brain, so let's start by taking a look at that.

Sequential sampling models (SSMs) are a family of models from mathematical psychology that have been developed since the 1960s (Forstmann, Ratcliff, & Wagenmakers, 2016). A particularly common SSM is the diffusion decision model (DDM) of decision-making, in which a decision-maker is assumed to noisily accumulate evidence towards a particular choice. Once the evidence in favor of a particular choice meets a decision threshold, that choice is taken.

For example, someone might be shown dots on a screen, some of which are moving in a certain direction. The task is to tell which direction the dots are moving in. After the person has seen enough dot movements, they will have sufficient confidence to make their judgment. The difficulty of the task can be precisely varied by changing the proportion of moving dots and their speed, making the movement easier or harder to detect. One can then measure how such changes affect the time needed for people to make a judgment.

A DDM is a simple model with just four parameters:

  • decision threshold: the amount of evidence in favor of one option that, once reached, causes that option to be chosen

  • starting point bias: a person may start biased towards one particular alternative, which can be modeled by them having some initial evidence putting them closer to one threshold than the other

  • drift rate: the average amount of evidence accumulated per time unit

  • non-decision time: when measuring e.g. reaction times, a delay introduced by factors such as perceptual processing, which take time but are not involved in the decision process itself
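To make these parameters concrete, here is a minimal simulation sketch of a binary DDM in Python. The specific parameter values are illustrative assumptions, not fitted to any data:

```python
import random

def simulate_ddm(drift_rate=0.3, threshold=1.0, starting_point=0.0,
                 non_decision_time=0.3, dt=0.001, noise_sd=1.0):
    """Simulate one trial of a binary diffusion decision model."""
    evidence = starting_point  # starting point bias: > 0 favors A, < 0 favors B
    t = 0.0
    while abs(evidence) < threshold:
        # Each time step adds the average drift plus Gaussian noise.
        evidence += drift_rate * dt + random.gauss(0, noise_sd) * dt ** 0.5
        t += dt
    choice = "A" if evidence > 0 else "B"
    # Non-decision time covers perception and motor output, not the decision.
    return choice, t + non_decision_time

# Easier evidence (a higher drift rate) should give faster, more accurate choices.
trials = [simulate_ddm(drift_rate=0.8) for _ in range(1000)]
accuracy = sum(choice == "A" for choice, _ in trials) / len(trials)
mean_rt = sum(rt for _, rt in trials) / len(trials)
print(f"accuracy: {accuracy:.2f}, mean RT: {mean_rt:.2f} s")
```

In this sketch, raising the drift rate speeds up decisions, and raising the threshold slows them down while making them more accurate, mirroring the behavioral findings discussed next.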

These parameters can be measured from behavioral experiments, and the model manages to fit a wide variety of behavioral experiments and intuitive phenomena well (Forstmann et al., 2016; Ratcliff, Smith, Brown, & McKoon, 2016; Roberts & Hutcherson, 2019). For example, easier-to-perceive evidence in favor of a particular option is reflected in a faster drift rate towards the decision threshold, causing faster decisions. On the other hand, making mistakes or being falsely told that one's performance on a trial is below that of most other participants prompts caution, increasing people's decision thresholds and slowing down response times (Roberts & Hutcherson, 2019).

While these models have been studied the most in the context of binary decisions, one can easily extend them to a choice between n alternatives by assuming the existence of multiple accumulators, each accumulating evidence towards its own choice, possibly inhibiting the others in the process. Neuroscience studies have identified structures which seem to correspond to various parts of SSMs. For example, in random dot motion tasks, where participants have to indicate the direction that dots on a screen are moving in,

the firing rates of direction selective neurons in the visual cortex (area MT/V5) exhibit a roughly linear increase (or decrease) as a function of the strength of motion in their preferred (or anti-preferred) direction. The average firing rate from a pool of neurons sharing similar direction preferences provides a time varying signal that can be compared to an average of another, opposing pool. This difference can be positive or negative, reflecting the momentary evidence in favor of one direction and against the other. (Shadlen & Shohamy, 2016)
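To illustrate the n-alternative extension mentioned above, here is a sketch of a race between several accumulators with mutual inhibition. The inhibition scheme and all values are illustrative assumptions rather than any particular published model:

```python
import random

def race_with_inhibition(drifts, threshold=1.0, inhibition=0.2,
                         dt=0.001, noise_sd=1.0):
    """Race between n accumulators, one per choice alternative."""
    evidence = [0.0] * len(drifts)
    t = 0.0
    while max(evidence) < threshold:
        snapshot = list(evidence)
        for i, drift in enumerate(drifts):
            rivals = sum(snapshot) - snapshot[i]  # competitors' total activity
            evidence[i] += (drift - inhibition * rivals) * dt \
                + random.gauss(0, noise_sd) * dt ** 0.5
            evidence[i] = max(evidence[i], 0.0)  # firing rates cannot go negative
        t += dt
    return evidence.index(max(evidence)), t

# The second option has the strongest momentary evidence, so it usually wins.
choice, rt = race_with_inhibition(drifts=[0.2, 0.6, 0.3])
print(choice, round(rt, 2))
```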

Shadlen & Shohamy (2016) note that experiments on more “real-world” decisions, such as decisions about which stock to pick or which snack to choose, also seem to be compatible with an SSM framework. However, this raises a few questions. For instance, it makes intuitive sense why people would take more time on a random motion task when they are less confident: watching the movements for a longer time accumulates more evidence for the right answer, until the decision threshold is met. But what is the additional evidence that is being accumulated in the case of making a decision based on subjective value?

The authors make an analogy to a symbol task which has been studied in rhesus monkeys. The monkeys need to decide between two choices, one of which is correct. For this task, they are shown a series of symbols, each of which predicts one of the choices as being correct with some probability. Through experience, the monkeys come to learn the weight of evidence carried by each symbol. In effect, they are accumulating evidence not by motion discrimination but by memory retrieval: retrieving some pre-learned association between a symbol and its assigned weight. This “leads to an incremental change in the firing rate of LIP neurons that represent the cumulative [likelihood ratio] in favor of the target”.
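The logic of the symbol task can be sketched as follows; the symbols and their weights are made up for illustration. Each 'retrieval' adds the symbol's learned log-likelihood ratio to a running total, and the decision fires once the total crosses a threshold in either direction:

```python
import math
import random

# Hypothetical learned associations: each symbol's probability of predicting
# target A, converted into a log-likelihood ratio (positive favors A).
SYMBOL_WEIGHTS = {symbol: math.log(p / (1 - p))
                  for symbol, p in {"circle": 0.8, "square": 0.6,
                                    "star": 0.45, "cross": 0.3}.items()}

def decide_from_symbols(symbols, threshold=1.5):
    """Accumulate evidence by 'retrieving' each symbol's learned weight."""
    llr = 0.0
    for seen, symbol in enumerate(symbols, start=1):
        llr += SYMBOL_WEIGHTS[symbol]  # one memory retrieval per symbol
        if llr >= threshold:
            return "A", seen
        if llr <= -threshold:
            return "B", seen
    return ("A" if llr > 0 else "B"), len(symbols)

stream = [random.choice(list(SYMBOL_WEIGHTS)) for _ in range(20)]
choice, seen = decide_from_symbols(stream)
print(f"chose {choice} after {seen} symbols")
```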

The proposal is that humans make choices based on subjective value using a similar process: by perceiving a possible option and then retrieving memories which carry information about the value of that option. For instance, when deciding between an apple and a chocolate bar, someone might recall how apples and chocolate bars have tasted in the past, how they felt after eating them, what kinds of associations they have about the healthiness of apples vs. chocolate, any other emotional associations they might have (such as fond memories of their grandmother's apple pie), and so on.

Shadlen & Shohamy further hypothesize that the reason why the decision process seems to take time is that different pieces of relevant information are found in physically disparate memory networks and neuronal sites. Access from the memory networks to the evidence accumulator neurons is physically bottlenecked by a limited number of “pipes”. Thus, a number of different memory networks need to take turns in accessing the pipe, causing a serial delay in the evidence accumulation process.


The biological Turing machine

In Consciousness and the Brain, Dehaene considers the example of doing arithmetic. Someone who is calculating something like 12 * 13 in their head might first multiply 10 by 12, keep the result in memory, multiply 3 by 12, and then add the results together. Thus, if a circuit in the brain has learned to do multiplication, consciousness can be used to route its results to a temporary memory storage, with those results then being routed from the storage to a circuit that does addition.

Production systems in AI are composed of if-then rules (production rules) which modify the contents of memory: one might work by detecting the presence of an item like “10 * 12” and rewriting it as “120”. On a conceptual level, the brain is proposed to do something similar: various contents of consciousness activate neurons storing something like production rules, which compete to fire. The first one to fire gets to apply its production, changing the contents of consciousness.
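Here is a toy production system of this kind, with two hypothetical rules sufficient to carry out the 12 * 13 example. Each rule watches the contents of memory for a pattern and rewrites whatever matches:

```python
import re

# Each production rule is an (if-pattern, then-rewrite) pair. A rule fires
# when its pattern matches an item in memory, and rewrites the matched part.
RULES = [
    # "Multiplication circuit": rewrite an x * y subexpression as its product.
    (re.compile(r"(\d+) \* (\d+)"), lambda m: str(int(m[1]) * int(m[2]))),
    # "Addition circuit": rewrite an x + y subexpression as its sum.
    (re.compile(r"(\d+) \+ (\d+)"), lambda m: str(int(m[1]) + int(m[2]))),
]

def run(memory):
    """Repeatedly fire the first matching rule until no rule matches."""
    fired = True
    while fired:
        fired = False
        for pattern, rewrite in RULES:
            match = pattern.search(memory)
            if match:
                memory = memory[:match.start()] + rewrite(match) + memory[match.end():]
                print(memory)  # the new contents of "consciousness"
                fired = True
                break
    return memory

# 12 * 13 decomposed as in Dehaene's example: one serial rewrite per firing.
run("10 * 12 + 3 * 12")  # -> "120 + 3 * 12" -> "120 + 36" -> "156"
```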

If I understand Dehaene's model correctly, he proposes to apply the neural mechanisms discussed in the previous sections, such as neuron groups which accumulate evidence towards some kind of decision, at a slightly lower level. In the behavioral experiments, there are mechanisms which accumulate evidence towards which particular physical actions to take, but a person might still be distracted by unrelated thoughts while performing the task. Dehaene's papers look at the kinds of mechanisms that choose which thoughts to think. That is, there are accumulator neurons which take “actions” to modify the contents of consciousness and working memory.

We can think of this as a two-stage process:

  1. A process involving subconscious “decisions” about what thoughts to think, and what kind of content to maintain in consciousness. Evidence indicating which kind of conscious content is most suited for the situation is in part based on hardwired priorities, and in part stored as associations about the kinds of thoughts that previously produced beneficial results.

  2. A higher-level process involving decisions about what physical actions to take. While the inputs to this process do not necessarily need to go through consciousness, consciously perceived evidence has a much higher weight. Thus, the lower-level process has significant influence on which evidence gets to the accumulators on this level.

To be clear, this does not necessarily correspond to two clearly distinct levels: Zylberberg, Dehaene, Roelfsema, & Sigman (2011) do not talk about there being any levels, and they suggest that “triggering motor actions” is one of the possible decisions involved. But their paper seems to be mostly focused on actions (or, in their language, production rules) which manipulate the contents of consciousness.

There seems to me to be a conceptual difference between the kinds of actions that change the contents of consciousness, and the kinds of actions which accumulate evidence over many items in consciousness (such as successively retrieved memories of snacks). Zylberberg et al. talk about a “winner-take-all race” to trigger a production rule, which to me implies that the evidence accumulated in favor of each production rule is cleared each time that the contents of consciousness are changed. This is seemingly incompatible with accumulating evidence over many consciousness-moments, so postulating a two-level distinction between accumulators seems like a straightforward way of resolving the issue.

(As an aside, I am, like Dehaene, treating consciousness and working memory as basically synonymous for the purposes of this discussion. This is not strictly correct; e.g. there may be items in working memory which are not currently conscious. However, since it's generally thought that items in working memory need to be actively rehearsed through consciousness in order to be maintained, I think that this conflation is okay for these purposes.)

Here's a conceptual overview of the stages in the “biological Turing machine's” operation (as Zylberberg et al. note, a production firing “is essentially equivalent to the action performed by a Turing machine in a single step”):

1. The production selection stage

At the beginning of a cognitive cycle, a person's working memory contains a number of different items, some internally generated (e.g. memories, thoughts) and some external (e.g. the sight or sound of something in the environment). Each item in memory may activate (contribute evidence to) neurons which accumulate weight towards triggering a particular kind of production rule. When some accumulator neurons reach their decision threshold, they apply their associated production rule.

In the above image, the blue circles at the bottom represent active items in working memory. Two items are activating the same group of accumulator neurons (shown red) and one is activating an unrelated one (shown brown).

2. Production rule ignition

Once a group of accumulator neurons reaches its decision threshold and fires a production rule, the model suggests that there are a number of things that the rule can do. In the above image, an active rule is modifying the contents of working memory: taking one of the blue circles, deleting it, and creating a new blue circle nearby. Hypothetically, this might be something like taking the mental objects holding “120” and “36”, adding them together, and storing the output of “156” in memory.

Obviously, since we are talking about brains, expressions like “writing into memory” or “deleting from memory” need to be understood in somewhat different terms than in computers; something being “deleted from working memory” mostly just means that a neuronal group which was storing the item in its firing pattern stops doing so.

The authors suggest that, among other things, production rules can:

  • trigger motor actions (e.g. saying or doing something)

  • change the contents of working memory to trigger a new processing step (e.g. saving the intermediate stage of an arithmetic operation, together with the intention to proceed with the next step)

  • activate and broadcast information that is in a “latent” state (e.g. retrieving a memory and sending it to consciousness)

  • activate peripheral processors capable of performing specific functions (e.g. changing the focus of attention)

3. New production selection

After the winning production rule has been applied, the production selection phase begins anew. At this stage or a future one, some kind of credit assignment process likely modifies the decision weights involved in choosing production rules: if a particular rule was activated in particular circumstances and seemed to produce positive consequences, then the connections which caused those circumstances to be considered evidence for that rule are strengthened.
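Pulling the three stages together, here is a minimal sketch of one such cycle: accumulators race to fire a production, the winner rewrites working memory, and a crude credit assignment step strengthens the winning rule's cue weights. Everything here (the Production class, the cue items, the reward signal) is an illustrative assumption, not Zylberberg et al.'s actual model:

```python
import random

class Production:
    """A production rule: cue weights used for evidence accumulation, plus
    an action that rewrites working memory and yields a reward signal."""
    def __init__(self, name, cue_weights, action):
        self.name, self.cue_weights, self.action = name, cue_weights, action

def cognitive_cycle(memory, productions, threshold=1.0, dt=0.01,
                    noise_sd=0.1, learning_rate=0.1):
    # 1. Production selection: each working-memory item feeds evidence to
    # the rules it is a cue for; the first accumulator to threshold wins.
    evidence = {p.name: 0.0 for p in productions}
    winner = None
    while winner is None:
        for p in productions:
            drift = sum(p.cue_weights.get(item, 0.0) for item in memory)
            evidence[p.name] += drift * dt + random.gauss(0, noise_sd) * dt ** 0.5
            if evidence[p.name] >= threshold:
                winner = p  # winner-take-all: the race ends here
                break
    cues = [item for item in memory if item in winner.cue_weights]
    # 2. Ignition: the winning rule rewrites the contents of working memory.
    memory, reward = winner.action(memory)
    # 3. Credit assignment: strengthen the connections from the cues that
    # were present to the rule that fired, in proportion to the reward.
    for item in cues:
        winner.cue_weights[item] += learning_rate * reward
    return memory

# A hypothetical rule that saves an intermediate result and queues the next step.
store_step = Production(
    "store intermediate result",
    {"done: 10 * 12 = 120": 0.6},
    lambda mem: (mem + ["next: 3 * 12"], 1.0))
print(cognitive_cycle(["done: 10 * 12 = 120"], [store_step]))
```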

Practical relevance

Okay, so why do we care? What is the practical relevance of this model?

First, this helps make some of my previous posts more concrete. In Building up to an Internal Family Systems model, I proposed some sort of process where different subagents were competing to change the contents of consciousness. For instance, “manager” subagents might be trying to manipulate the contents of consciousness so as to avoid unpleasant thoughts and to keep the person out of dangerous circumstances.

People who do IFS, or other kinds of “parts work”, will notice that different subagents are associated with different kinds of bodily sensations and flavors of consciousness. A priori, there shouldn't be any particular reason for this… except, perhaps, if the strength of such sensations correlated with the activation of a particular subagent, with those sensations then being internally used for credit assignment to identify and reward subagents which had been active in a given cognitive cycle. (This is mostly pure speculation, but supported by some observations to which I hope to return in a future post.)

In my original post, I mostly talked about exiles (neural patterns blocked from consciousness by other subagents) as being subagents related to a painful memory. But while it is not emphasized as much, IFS holds that other subagents can in principle be exiled too. For example, a subagent which tends to react using anger may frequently lead to harmful consequences, and then be blocked by other subagents. This can easily be modeled using the neural Turing machine framework: over time, the system learns decisions which modify consciousness so as to prevent the activation of a production rule that gives power to the angry subagent. As this helps avoid harmful consequences, it begins to happen more and more often.

Hazard has a nice recent post about this kind of thing happening with emotions in general:

So young me is upset that the grub master for our camping trip forgot half the food on the menu, and all we have for breakfast is milk. I couldn't “fix it” given that we were in the woods, so my next option was “stop feeling upset about it.” So I reached around in the dark of my mind, and Oops, the “healthily process feelings” lever is right next to the “stop listening to my emotions” lever.
The end result? “Wow, I decided to stop feeling upset, and then I stopped feeling upset. I'm so fucking good at emotional regulation!!!!!”
My model now is that I substituted “is there a monologue of upsetness in my conscious mental loop?” for “am I feeling upset?”. So from my perspective, it just felt like I was very in control of my feelings. Whenever I wanted to stop feeling something, I could. When I thought of ignoring/repressing emotions, I imagined trying to cover up something that was there, maybe with a story. Or I thought if you poked around ignored emotions there would be a response of anger or annoyance. I at least expected that if I was ignoring my emotions, that if I got very calm and then asked myself, “Is there anything that you're feeling?” I would get an answer.
Again, the assumption was, “If it's in my mind, I should be able to notice if I look.” This ignored what was actually happening, which was that I was cutting the phone lines so my emotions couldn't talk to me in the first place.

Feeling upset feels bad; ceasing to feel upset feels good. The brain notices that there is some operation which causes the feeling of upset to disappear from consciousness: carrying out this operation also produces a feeling of satisfaction in the form of “yay, I'm good at emotional regulation!”. As a result of being rewarded, the operation eventually becomes so automatic as to block even hints of undesired emotions, making the block in question impossible to notice.

Another observation is that in IFS, as well as in Internal Double Crux, an important mental move seems to be “giving subagents a chance to finish talking”. For instance, subagent A might hold a consideration pointing in a particular direction, while subagent B holds a consideration pointing in the opposite direction. When A starts presenting its points, B interrupts with its own point; in response, A interrupts with its point. It seems to be possible to commit to not making a decision before having heard both subagents, and having done that, to ask them to take turns presenting their points without interrupting each other. What exactly is going on here?

Suppose that a person is contemplating the decision “should I trust my friend to have my back in a particular risky venture”. Subagent A holds the consideration “allies are important, and we don't have any; we should really trust our friend so that we would have more allies”. Subagent B holds the consideration “being betrayed would be really bad, and our friend seems untrustworthy; it's important that we don't sign up for this”. Subagent A considers it really important to go on this venture together; subagent B considers it really important not to.

Recall that human decision-making happens by accumulating evidence towards different choices until a decision threshold is met. If A were allowed to present all of its evidence in favor of signing up for the venture, that might sway the decision over the threshold before B was able to present the evidence against. Thus, there is a mechanism which allows B to “interrupt” A in order to present its own evidence. Unfortunately, the roles are now reversed: it is A which risks B's evidence meeting a decision threshold prematurely, so A must interrupt in turn.
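A small sketch of this dynamic, with made-up evidence weights for each subagent's points: whichever side presents uninterrupted crosses the threshold before the other is heard, while strict turn-taking keeps the total below either threshold:

```python
import itertools

# Hypothetical per-point evidence weights: positive favors "trust",
# negative favors "don't trust".
A_POINTS = [0.4, 0.3, 0.5]     # subagent A: allies are important, ...
B_POINTS = [-0.5, -0.4, -0.3]  # subagent B: betrayal would be really bad, ...

def decide(stream, threshold=1.0):
    total = 0.0
    for point in stream:
        total += point
        if abs(total) >= threshold:
            return "trust" if total > 0 else "don't trust"
    return "undecided"

# If A presents uninterrupted, the threshold is crossed before B speaks.
print(decide(A_POINTS + B_POINTS))  # 'trust'
# Constant mutual interruption keeps either side from prematurely crossing
# the threshold -- but may also leave no decision at all.
interleaved = itertools.chain.from_iterable(zip(A_POINTS, B_POINTS))
print(decide(interleaved))  # 'undecided'
```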

Subjectively, this is experienced as intense internal conflict, with two extreme considerations pushing in opposite directions, allowing no decision to be made, unless there is a plausible commitment to not making a decision until both have been heard out. (To me, this feels like my attention being caught in a tug-of-war between one set of considerations and another. Roberts & Hutcherson (2019) note that “[a] large body of work suggests that negative information draws focus through rapid detection [64–68] and attentional capture [69–71]. [...] Several studies now show that attending to a choice alternative or attribute increases its weighting in the evidence accumulation process [72–75]. To the extent that negative affect draws attention to a choice-relevant attribute or object, it should thus increase the weight it receives.”)

There’s one more im­por­tant con­sid­er­a­tion. Eliezer has writ­ten about cached thoughts—be­liefs which we have once ac­quired, then never re-eval­u­ated and just acted on them from that on­wards. But this model sug­gests that things may be worse: it’s not just that we are run­ning on cached thoughts. In­stead, even the pre-con­scious mechanisms de­cid­ing which thoughts are worth re-eval­u­at­ing are run­ning on cached val­ues.

Some­times ex­ter­nal ev­i­dence may be suffi­cient to force an up­date, but there can also be self-fulfilling blind spots. For in­stance, you may note that nega­tive emo­tions never even sur­face into your con­scious­ness. This ob­ser­va­tion then trig­gers a sense of satis­fac­tion about be­ing good at emo­tional reg­u­la­tion, so that thoughts about al­ter­na­tive—and less pleas­ant—hy­pothe­ses are never se­lected for con­sid­er­a­tion. In fact, ev­i­dence to the con­trary may feel ac­tively un­pleas­ant to con­sider, trig­ger­ing sub­agents which use feel­ings such as an­noy­ance—or if an­noy­ance would be too sus­pi­cious, just plain in­differ­ence—to push that ev­i­dence out of con­scious­ness, be­fore it can con­tribute to a de­ci­sion.

And the older those flawed as­sump­tions are, the more time there is for ad­di­tional struc­tures to build on top of them.

This post is part of re­search funded by the Foun­da­tional Re­search In­sti­tute. Thanks to Maija Haav­isto for line edit­ing an ini­tial draft.

References

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the Brain Codes Our Thoughts. New York, New York: Viking.

Dehaene, S., & Sigman, M. (2012). From a single decision to a multi-step algorithm. Current Opinion in Neurobiology, 22(6), 937–945.

Dennett, D. C. (1991). Consciousness Explained (1st edition). Boston: Little Brown & Co.

Forstmann, B. U., Ratcliff, R., & Wagenmakers, E.-J. (2016). Sequential Sampling Models in Cognitive Neuroscience: Advantages, Applications, and Extensions. Annual Review of Psychology, 67, 641–666.

Ratcliff, R., Smith, P. L., Brown, S. D., & McKoon, G. (2016). Diffusion Decision Model: Current Issues and History. Trends in Cognitive Sciences, 20(4), 260–281.

Roberts, I. D., & Hutcherson, C. A. (2019). Affect and Decision Making: Insights and Predictions from Computational Models. Trends in Cognitive Sciences, 23(7), 602–614.

Shadlen, M. N., & Shohamy, D. (2016). Decision Making and Sequential Sampling from Memory. Neuron, 90(5), 927–939.

Zylberberg, A., Dehaene, S., Roelfsema, P. R., & Sigman, M. (2011). The human Turing machine: a neural framework for mental programs. Trends in Cognitive Sciences, 15(7), 293–300.