My computational framework for the brain

By now I’ve written a bunch of blog posts on brain architecture and algorithms, not in any particular order and generally interspersed with long digressions into Artificial General Intelligence. Here I want to summarize my key ideas in one place, to create a slightly better entry point, and something I can refer back to in certain future posts that I’m planning. If you’ve read every single one of my previous posts (hi mom!), there’s not much new here.

In this post, I’m trying to paint a picture. I’m not really trying to justify it, let alone prove it. The justification ultimately has to be: All the pieces are biologically, computationally, and evolutionarily plausible, and the pieces work together to explain absolutely everything known about human psychology and neuroscience. (I believe it! Try me!) Needless to say, I could be wrong in both the big picture and the details (or missing big things). If so, writing this out will hopefully make my wrongness easier to discover!

Pretty much everything I say here and its opposite can be found in the cognitive neuroscience literature. (It’s a controversial field!) I make no pretense to originality (with one exception noted below), but can’t be bothered to put in actual references. My previous posts have a bit more background, or just ask me if you’re interested. :-P

So let’s start in on the 7 guiding principles for how I think about the brain:

1. Two subsystems: “Neocortex” and “Subcortex”

This is the starting point. I think it’s absolutely critical. The brain consists of two subsystems. The neocortex is the home of “human intelligence” as we would recognize it—our beliefs, goals, ability to plan and learn and understand, every aspect of our conscious awareness, etc. etc. (All mammals have a neocortex; birds and lizards have a homologous and functionally-equivalent structure called the “pallium”.) Some other parts of the brain (hippocampus, parts of the thalamus and basal ganglia) help the neocortex do its calculations, and I lump them into the neocortex subsystem. I’ll use the term subcortex for the rest of the brain (midbrain, amygdala, etc.).

  • Aside: Is this the triune brain theory? No. Triune brain theory is, from what I gather, a collection of ideas about brain evolution and function, most of which are wrong. One aspect of triune brain theory is putting a lot of emphasis on the distinction between neocortical calculations and subcortical calculations. I like that part. I’m keeping that part, and I’m improving it by expanding the neocortex club to also include the thalamus, hippocampus, lizard pallium, etc., and then I’m ignoring everything else about triune brain theory.

2. Cortical uniformity

I claim that the neocortex is, to a first approximation, architecturally uniform, i.e. all parts of it are running the same generic learning algorithm in a massively-parallelized way.

The two caveats to cortical uniformity (spelled out in more detail at that link) are:

  • There are sorta “hyperparameters” on the generic learning algorithm which are set differently in different parts of the neocortex—for example, different regions have different densities of each neuron type, different thresholds for making new connections (which also depend on age), etc. This is not at all surprising; all learning algorithms inevitably have tradeoffs whose optimal settings depend on the domain that they’re learning (no free lunch).

    • As one of many examples of how even “generic” learning algorithms benefit from domain-specific hyperparameters, if you’ve seen a pattern “A then B then C” recur 10 times in a row, you will start unconsciously expecting AB to be followed by C. But “should” you expect AB to be followed by C after seeing ABC only 2 times? Or what if you’ve seen the pattern ABC recur 72 times in a row, but then saw AB(not C) twice? What “should” a learning algorithm expect in those cases? The answer depends on the domain—how regular vs random are the environmental patterns you’re learning? How stable are they over time? The answer is presumably different for low-level visual patterns vs motor control patterns etc. (See the toy sketch just after this list.)

  • There is a gross wiring diagram hardcoded in the genome—i.e., a set of connections among different neocortical regions, and between those regions and other parts of the brain. These connections later get refined and edited during learning. These speed the learning process by bringing together information streams with learnable relationships—for example, the wiring diagram seeds strong connections between toe-related motor output areas and toe-related proprioceptive (body position sense) input areas. We can learn relations between information streams without any help from the innate wiring diagram, by routing information around the cortex in more convoluted ways—see the Ian Waterman example here—but it’s slower, and may consume conscious attention.
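
To make the hyperparameter point above concrete, here’s a toy sketch in Python (purely illustrative; nothing here is meant to resemble actual cortical circuitry): a Laplace-smoothed sequence predictor where a single pseudocount hyperparameter controls how many repetitions of “A, B, then C” it takes before the model starts confidently betting on C.

```python
# Toy sketch (illustrative only): a sequence predictor where one "pseudocount"
# hyperparameter controls how quickly it commits to "C follows A,B".
from collections import defaultdict

class ToySequencePredictor:
    def __init__(self, pseudocount):
        # Larger pseudocount = more skeptical prior = slower to commit.
        self.pseudocount = pseudocount
        self.counts = defaultdict(lambda: defaultdict(float))

    def observe(self, context, next_symbol):
        self.counts[context][next_symbol] += 1.0

    def predict(self, context, symbol, n_symbols=26):
        seen = self.counts[context]
        total = sum(seen.values()) + self.pseudocount * n_symbols
        return (seen[symbol] + self.pseudocount) / total

fast = ToySequencePredictor(pseudocount=0.1)   # suits very regular domains
slow = ToySequencePredictor(pseudocount=5.0)   # suits noisier domains
for _ in range(2):                             # only two observations of A,B -> C
    fast.observe(("A", "B"), "C")
    slow.observe(("A", "B"), "C")
print(round(fast.predict(("A", "B"), "C"), 2))  # ~0.46: already betting on C
print(round(slow.predict(("A", "B"), "C"), 2))  # ~0.05: still mostly agnostic
```

The point is just that the same generic update rule behaves quite differently depending on that one setting, and the best setting depends on how regular vs noisy the domain is.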

3. Blank-slate neocortex

(...But not blank-slate subcortex! More on that below.)

I claim that the neocortex starts out as a “blank slate”: Just like an ML model with random weights, the neocortex cannot make any correct predictions or do anything useful until it learns to do so from previous inputs, outputs, and rewards.

(By the way, I am not saying that the neocortex’s algorithm is similar to today’s ML algorithms. There’s more than one blank-slate learning algorithm! See image.)

A “blank slate” learning algorithm, as I’m using the term, is one that learns information “from scratch”—an example would be a Machine Learning model that starts with random weights and then proceeds with gradient descent. When you imagine it, you should not imagine an empty void that gets filled with data. You should imagine a machine that learns more and better patterns over time, and writes those patterns into a memory bank—and “blank slate” just means that the memory bank starts out empty. There are many such machines, and they will learn different patterns and therefore do different things. See next section, and see also the discussion of hyperparameters in the previous section.
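
A cartoon of that distinction (again, just an illustration, not a claim about the actual algorithm): the learning machinery below exists in full from the very start; the only thing that is “blank” is its memory of learned patterns.

```python
# Toy illustration: "blank slate" = the machinery is all there from the start,
# but the memory bank of learned patterns begins empty.
class ToyBlankSlateLearner:
    def __init__(self):
        self.memory = {}  # starts empty -- this is the "blank slate" part

    def learn(self, context, outcome):
        # The update rule itself is innate machinery, not something learned.
        self.memory.setdefault(context, []).append(outcome)

    def predict(self, context):
        if context not in self.memory:
            return None  # no experience yet, so no useful prediction
        outcomes = self.memory[context]
        return max(set(outcomes), key=outcomes.count)  # most frequent outcome

learner = ToyBlankSlateLearner()
print(learner.predict("dark clouds"))   # None: can't predict anything yet
learner.learn("dark clouds", "rain")
learner.learn("dark clouds", "rain")
learner.learn("dark clouds", "sun")
print(learner.predict("dark clouds"))   # "rain": learned from inputs
```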

Why do I think that the neocortex starts from a blank slate? Two types of reasons:

  • Details of how I think the neocortical algorithm works: This is the main reason for me.

    • For example, as I mentioned here, there’s a theory I like that says that all feedforward signals (I’ll define that in the next section) in the neocortex—which includes all signals coming into the neocortex from outside it, plus many cortex-to-cortex signals—are re-encoded into the data format that the neocortex can best process—i.e. a set of sparse codes, with low overlap, uniform distribution, and some other nice properties—and this re-encoding is done by a pseudorandom process! If that’s right, it would seem to categorically rule out anything but a blank-slate starting point. (There’s a toy sketch of this kind of re-encoding just after this list.)

    • More broadly, we know the algorithm can learn new concepts, and new relationships between concepts, without having any of those concepts baked in by evolution—e.g. learning about rocket engine components. So why not consider the possibility that that’s all it does, from the very beginning? I can see vaguely how that would work, why that would be biologically plausible and evolutionarily adaptive, and I can’t currently see any other way that the algorithm can work.

  • Absence of evidence to the contrary: I have a post Human Instincts, Symbol Grounding, and the Blank-Slate Neocortex where I went through a list of universal human instincts, and didn’t see anything inconsistent with a blank-slate neocortex. The subcortex—which is absolutely not a blank slate—plays a big role in most of those. (More on this in a later section.) Likewise I’ve read about the capabilities of newborn humans and other animals, and still don’t see any problem. I accept all challenges; try me!
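
Here’s a toy sketch of the pseudorandom sparse re-encoding mentioned in the first bullet above (my own illustrative version, with assumed parameters like 40 active bits out of 2000; it’s not the actual theory’s circuit, just the flavor of it): each input gets hashed into a sparse binary code, so distinct inputs end up with low-overlap codes without anything being learned or innately specified about their content.

```python
# Toy sketch (illustrative assumptions only): re-encode arbitrary inputs into
# fixed-size sparse binary codes via a pseudorandom (hash-based) process.
# Distinct inputs get low-overlap, roughly uniformly-spread codes "for free".
import hashlib

def sparse_code(input_signal, size=2000, active_bits=40):
    """Deterministically pick `active_bits` of `size` positions from a hash."""
    active = set()
    counter = 0
    while len(active) < active_bits:
        digest = hashlib.sha256(f"{input_signal}|{counter}".encode()).digest()
        active.add(int.from_bytes(digest[:4], "big") % size)
        counter += 1
    return active

code_a = sparse_code("input pattern X on line #2433")
code_b = sparse_code("input pattern Y on line #2433")
print(len(code_a))            # 40 active positions out of 2000 (sparse)
print(len(code_a & code_b))   # overlap is tiny (typically 0 or 1 positions)
```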

4. What is the neocortical algorithm?

4.1. “Analysis by synthesis” + “Planning by probabilistic inference”

“Analysis by synthesis” means that the neocortex searches through a space of generative models for a model that predicts its upcoming inputs (both external inputs, like vision, and internal inputs, like proprioception and reward). “Planning by probabilistic inference” (term from here) means that we treat our own actions as probabilistic variables to be modeled, just like everything else. In other words, the neocortex’s output lines (motor outputs, hormone outputs, etc.) are the same type of signal as any generative model prediction, and are processed in the same way.

Here’s how those come together. As discussed in Predictive Coding = RL + SL + Bayes + MPC, and shown in the figure below:

  • The neocortex favors generative models that have been making correct predictions, and discards generative models that have been making predictions that are contradicted by input data (or by other favored generative models).

  • And, the neocortex favors generative models which predict larger future reward, and discards generative models that predict smaller (or more negative) future reward.

This combination allows both good epistemics (ever-better understanding of the world), and good strategy (planning towards goals) in the same algorithm. This combination also has some epistemic and strategic failure modes—e.g. a propensity to wishful thinking—but in a way that seems compatible with human psychology & behavior, which is likewise not perfectly optimal, if you haven’t noticed. Again, see the link above for further discussion.
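
As a cartoon of how those two criteria might combine (a toy sketch with a made-up scoring rule; I’m certainly not claiming the neocortex computes a literal weighted sum), imagine each candidate generative model carrying a score that rises when its predictions come true and also rises with the reward it forecasts:

```python
# Toy sketch (made-up scoring rule): candidate generative models are favored
# based on (1) how well they've been predicting inputs and (2) how much future
# reward they predict. High scorers stay active; low scorers get discarded.
def score_model(prediction_accuracy, predicted_reward, reward_weight=0.5):
    # Accuracy keeps epistemics honest; predicted reward pulls plans toward goals.
    return prediction_accuracy + reward_weight * predicted_reward

candidates = {
    "I walk to the cupboard and get a snack": (0.9, 1.0),
    "A snack flies into my mouth by itself":  (0.1, 1.0),   # wishful thinking
    "I sit here and stay hungry":             (0.9, 0.0),
}
for name, (accuracy, reward) in candidates.items():
    print(f"{score_model(accuracy, reward):.2f}  {name}")
# The realistic-and-rewarding plan scores highest (1.40), but note that the
# wishful-thinking model still gets partial credit from its predicted reward
# (0.60) -- the kind of failure mode mentioned above.
```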

[Figure: Criteria by which generative models rise to prominence in the neocortex; see Predictive Coding = RL + SL + Bayes + MPC for detailed discussion.]

  • Aside: Is this the same as Predictive Coding / Free-Energy Principle? Sorta. I’ve read a fair amount of “mainstream” predictive coding (Karl Friston, Andy Clark, etc.), and there are a few things about it that I like, including the emphasis on generative models predicting upcoming inputs, and the idea of treating neocortical outputs as just another kind of generative model prediction. It also has a lot of other stuff that I disagree with (or don’t understand). My account differs from theirs mainly by (1) emphasizing multiple simultaneous generative models that compete & cooperate (cf. “society of mind”, multiagent models of mind, etc.), rather than “a” (singular) prior, and (2) restricting discussion to the neocortex subsystem, rather than trying to explain the brain as a whole. In both cases, this may be partly a difference of emphasis & intuitions, rather than fundamental. But I think the core difference is that predictive coding / FEP takes some processes to be foundational principles, whereas I think that those same things do happen, but that they’re emergent behaviors that come out of the algorithm under certain conditions. For example, in Predictive Coding & Motor Control I talk about the predictive-coding story that proprioceptive predictions are literally exactly the same as motor outputs. Well, I don’t think they’re exactly the same. But I do think that proprioceptive predictions and motor outputs are the same in some cases (but not others), in some parts of the neocortex (but not others), and after (but not before) the learning algorithm has been running a while. So I kinda wind up in a similar place as predictive coding, in some respects.

4.2. Compositional generative models

Each of the generative models consists of predictions that other generative models are on or off, and/or predictions that input channels (coming from outside the neocortex—vision, hunger, reward, etc.) are on or off. (“It’s symbols all the way down.”) All the predictions are attached to confidence values, and both the predictions and confidence values are, in general, functions of time (or of other parameters—I’m glossing over some details). The generative models are compositional, because if two of them make disjoint and/or consistent predictions, you can create a new model that simply predicts that both of those two component models are active simultaneously. For example, we can snap together a “purple” generative model and a “jar” generative model to get a “purple jar” generative model. They are also compositional in other ways—for example, you can time-sequence them, by making a generative model that says “Generative model X happens and then Generative model Y happens”.
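
Here’s one way to picture that as a data structure (a toy sketch with my own made-up fields and confidence numbers): a generative model is just a bundle of (target, confidence) predictions about input channels and other models, and two models can be snapped together into a conjunction or chained into a sequence.

```python
# Toy data structure (illustrative only): a generative model is a set of
# predictions, each with a confidence value, that other models or input
# channels are active. Models compose by conjunction or by time-sequencing.
from dataclasses import dataclass, field

@dataclass
class GenerativeModel:
    name: str
    predictions: dict = field(default_factory=dict)  # target -> confidence
    sequence: list = field(default_factory=list)     # ordered sub-models, if any

def conjoin(name, a, b):
    # "Purple jar": predict that both component models are active at once.
    return GenerativeModel(name, predictions={a.name: 0.9, b.name: 0.9})

def then(name, first, second):
    # "Model X happens and then Model Y happens."
    return GenerativeModel(name, sequence=[first.name, second.name])

purple = GenerativeModel("purple", {"visual input: purple-ish color": 0.8})
jar = GenerativeModel("jar", {"visual input: jar-shaped contour": 0.7})
purple_jar = conjoin("purple jar", purple, jar)
story = then("see the jar, then reach for it", purple_jar,
             GenerativeModel("reach-and-grasp"))
print(purple_jar.predictions)   # {'purple': 0.9, 'jar': 0.9}
print(story.sequence)           # ['purple jar', 'reach-and-grasp']
```

(In the real thing the predictions would be functions of time, per the paragraph above; I’m leaving that out of the sketch.)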

PGM-type message-passing: Among other things, the search process for the best set of simultaneously-active generative models involves something at least vaguely analogous to message-passing (belief propagation) in a probabilistic graphical model. Dileep George’s vision model is a well-fleshed-out example.

Hierarchies are part of the story but not everything: Hierarchies are a special case of compositional generative models. A generative model for an image of “8” makes strong predictions that there are two “circle” generative models positioned on top of each other. The “circle” generative model, in turn, makes strong predictions that certain contours and textures are present in the visual input stream.

However, not all relations are hierarchical. The “is-a-bird” model makes a medium-strength prediction that the “is-flying” model is active, and the “is-flying” model makes a medium-strength prediction that the “is-a-bird” model is active. Neither is hierarchically above the other.

As another example, the brain has a visual processing hierarchy, but as I understand it, studies show that the brain has loads of connections that don’t respect the hierarchy.
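
To give a flavor of the non-hierarchical, message-passing-ish settling process (a toy sketch only, loosely inspired by belief propagation; it is emphatically not Dileep George’s model or anything biologically serious): “is-a-bird” and “is-flying” each lend medium-strength support to the other, so evidence for either one pulls both upward.

```python
# Toy sketch (loosely belief-propagation-flavored, not a real PGM algorithm):
# two models each make a medium-strength prediction that the other is active,
# with neither above the other in a hierarchy. Passing "support" back and
# forth settles into a state where evidence for one raises belief in both.
def settle(evidence_bird, evidence_flying, coupling=0.4, steps=20):
    bird, flying = evidence_bird, evidence_flying
    for _ in range(steps):
        bird = min(1.0, evidence_bird + coupling * flying)
        flying = min(1.0, evidence_flying + coupling * bird)
    return round(bird, 2), round(flying, 2)

print(settle(evidence_bird=0.5, evidence_flying=0.0))  # (0.6, 0.24): bird evidence alone lifts "flying" a bit
print(settle(evidence_bird=0.5, evidence_flying=0.5))  # (0.83, 0.83): mutual support lifts both further
```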

Feedforward and feedback signals: There are two important types of signals in the neocortex.

A “feedback” signal is a generative model prediction, attached to a confidence level, which includes all the following (there’s a toy data-type sketch after the two lists below):

  • “I predict that neocortical input line #2433 will be active, with probability 0.6”.

  • “I predict that generative model #95738 will be active, with probability 0.4”.

  • “I predict that neocortical output line #185492 will be active, with probability 0.98”—and this one is a self-fulfilling prophecy, as the feedback signal is also the output line!

A “feedforward” signal is an announcement that a certain signal is, in fact, active right now, which includes all the following:

  • “Neocortical input line #2433 is currently active!”

  • “Generative model #95738 is currently active!”
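
Here’s the promised toy rendering of those two signal types as data (my own framing; the numbered lines are just the made-up examples from the lists above):

```python
# Toy sketch: the two signal types as simple data records.
from dataclasses import dataclass

@dataclass
class FeedbackSignal:
    """A generative-model prediction that some target will be active."""
    target: str          # an input line, an output line, or another model
    confidence: float    # e.g. 0.6 = "probably", 0.98 = "nearly certain"

@dataclass
class FeedforwardSignal:
    """An announcement that some source is, in fact, active right now."""
    source: str

predictions = [
    FeedbackSignal("neocortical input line #2433", 0.6),
    FeedbackSignal("generative model #95738", 0.4),
    FeedbackSignal("neocortical output line #185492", 0.98),  # self-fulfilling: this one IS the output
]
announcements = [
    FeedforwardSignal("neocortical input line #2433"),
    FeedforwardSignal("generative model #95738"),
]
# A model whose FeedbackSignals keep getting contradicted by the incoming
# FeedforwardSignals loses favor (see section 4.1 above).
```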

There are about 10× more feedback connections than feedforward connections in the neocortex, I guess for algorithmic reasons I don’t currently understand.

In a hierarchy, the top-down signals are feedback, and the bottom-up signals are feedforward.

The terminology here is a bit unfortunate. In a motor output hierarchy, we think of information flowing “forward” from high-level motion plan to low-level muscle control signals, but that’s the feedback direction. The forward/back terminology works better for sensory input hierarchies. Some people say “top-down” and “bottom-up” instead of “feedback” and “feedforward” respectively, which is nice and intuitive for both input and output hierarchies. But then that terminology gets confusing when we talk about non-hierarchical connections. Oh well.

(I’ll also note here that “mainstream” predictive coding discussions sometimes talk about feedback signals being associated with confidence intervals for analog feedforward signals, rather than confidence levels for binary feedforward signals. I changed it on purpose. I like my version better.)

5. The subcortex steers the neocortex towards biologically-adaptive behaviors.

The blank-slate neocortex can learn to predict input patterns, but it needs guidance to do biologically adaptive things. So one of the jobs of the subcortex is to try to “steer” the neocortex, and the subcortex’s main tool for this task is its ability to send rewards to the neocortex at the appropriate times. Everything that humans reliably and adaptively do with their intelligence, from liking food to making friends, depends on the various reward-determining calculations hardwired into the subcortex.

6. The neocortex is a black box from the perspective of the subcortex. So steering the neocortex is tricky!

Only the neocortex subsystem has an intelligent world-model. Imagine you just lost a big bet, and now you can’t pay back your debt to the loan shark. That’s bad. The subcortex needs to send negative rewards to the neocortex. But how can it know? How can the subcortex have any idea what’s going on? It has no concept of a “bet”, or “debt”, or “payment” or “loan shark”.

This is a very general problem. I think there are two basic ingredients in the solution.

Here’s a diagram to refer to, based on the one I put in Inner Alignment in the Brain:

[Figure: Schematic illustration of some aspects of the relationship between subcortex & neocortex. See also my previous post Inner Alignment in the Brain for more on this.]

6.1 The subcortex can learn what’s going on in the world via its own, parallel, sensory-processing system.

Thus, for example, we have the well-known visual processing system in our visual cortex, and we have the lesser-known visual processing system in our midbrain (superior colliculus). Ditto for touch, smell, proprioception, nociception, etc.

While they have similar inputs, these two sensory processing systems could not be more different!! The neocortex fits its inputs into a huge, open-ended predictive world-model, but the subcortex instead has a small and hardwired “ontology” consisting of evolutionarily-relevant things it can recognize: faces, human speech sounds, spiders, snakes, looking down from a great height, various tastes and smells, stimuli that call for flinching, stimuli that one should orient towards, etc. etc. These hardwired recognition circuits are connected to hardwired responses.

For example, babies learn to recognize faces quickly and reliably in part because the midbrain sensory processing system knows what a face looks like, and when it sees one, it will saccade to it, and thus the neocortex will spend disproportionate time building predictive models of faces.

...Or better yet, instead of saccading to faces itself, the subcortex can reward the neocortex each time it detects that it is looking at a face! Then the neocortex will go off looking for faces, using its neocortex-superpowers to learn arbitrary patterns of sensory inputs and motor outputs that tend to result in looking at people’s faces.
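
Here’s a cartoon of that reward-based steering loop (a toy sketch; the “face detector” is a stand-in for whatever crude hardwired circuit the midbrain actually uses, and the learner is a bare-bones bandit, nothing like a real neocortex): the subcortex runs its own detector on the sensory stream and pays the neocortex a reward whenever the detector fires, leaving the neocortex to figure out how to make that happen more often.

```python
# Toy sketch: the subcortex can't read the neocortex's "thoughts", but it can
# run its own crude hardwired detector on the sensory stream and reward the
# neocortex whenever the detector fires. The neocortex, a generic reward-driven
# learner, then discovers on its own which actions trigger the reward.
import random

def hardwired_face_detector(sensory_input):
    # Stand-in for a crude, innate midbrain pattern-matcher.
    return "face" in sensory_input

def subcortex_reward(sensory_input):
    return 1.0 if hardwired_face_detector(sensory_input) else 0.0

class ToyNeocortex:
    """Blank-slate learner: tries actions, keeps whatever earns reward."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}
    def act(self):
        # Pick the highest-valued action, with some noise for exploration.
        return max(self.values, key=lambda a: self.values[a] + random.random())
    def learn(self, action, reward, lr=0.3):
        self.values[action] += lr * (reward - self.values[action])

world = {"look at the mobile": "colorful toy", "look at parent": "parent's face"}
neocortex = ToyNeocortex(actions=list(world))
for _ in range(50):
    action = neocortex.act()
    neocortex.learn(action, subcortex_reward(world[action]))
print(neocortex.values)   # "look at parent" ends up valued far higher
```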

6.2 The subcortex can see the neocortex’s outputs—which include not only prediction but imagination, memory, and empathetic simulations of other people.

For example, if the neocortex never predicts or imagines any reward, then the subcortex can guess that the neocortex has a grim assessment of its prospects for the future—I’ll discuss that particular example much more in an upcoming post on depression.

To squeeze more information out of the neocortex, the subcortex can also “teach” the neocortex to reveal when it is thinking of one of the situations in the subcortex’s small hardwired ontology (faces, spiders, sweet tastes, etc.—see above). For example, if the subcortex rewards the neocortex for cringing in advance of pain, then the neocortex will learn to favor pain-prediction generative models that also send out cringe-motor-commands. And thus, eventually, it will also start sending weak cringe-motor-commands when imagining future pain, or when empathically simulating someone in pain—and the subcortex can detect that, and issue hardwired responses in turn.

See Inner Alignment in the Brain for more examples & discussion of all this stuff about steering.

Unlike most of the other stuff here, I haven’t seen anything in the literature that takes “how does the subcortex steer the neocortex?” to be a problem that needs to be solved, let alone that solves it. (Let me know if you have!) …Whereas I see it as The Most Important And Time-Sensitive Problem In All Of Neuroscience—because if we build neocortex-like AI algorithms, we will need to know how to steer them towards safe and beneficial behaviors!

7. The subcortical algorithms remain largely unknown

I think much less is known about the algorithms of the subcortex (midbrain, amygdala, etc.) than about the algorithms of the neocortex. There are a couple issues:

  • The subcortex’s algorithms are more complicated than the neocortex’s algorithms: As described above, I think the neocortex has more-or-less one generic learning algorithm. Sure, it consists of many interlocking parts, but it has an overall logic. The subcortex, by contrast, has circuitry for detecting and flinching away from an incoming projectile, circuitry for detecting spiders in the visual field, circuitry for (somehow) implementing lots of different social instincts, etc. etc. I doubt all these things strongly overlap each other, though I don’t know that for sure. That makes it harder to figure out what’s going on.

    • I don’t think the algorithms are “complicated” in the sense of “mysterious and sophisticated”. Unlike the neocortex, I don’t think these algorithms are doing anything where a machine learning expert couldn’t sit down and implement something functionally equivalent in PyTorch right now. I think they are complicated in that they have a complicated specification (this kind of input produces that kind of output, and this other kind of input produces this other kind of output, etc. etc. etc.), and this specification is what we need to work out.

  • Fewer people are working on subcortical algorithms than on the neocortex’s algorithms: The neocortex is the center of human intelligence and cognition. So very exciting! So very monetizable! By contrast, the midbrain seems far less exciting and far less practically useful. Also, the neocortex is nearest the skull, and thus accessible to some experimental techniques (e.g. EEG, MEG, ECoG) that don’t work on deeper structures. This is especially limiting when studying live humans, I think.

As mentioned above, I am very unhappy about this state of affairs. For the project of building safe and beneficial artificial general intelligence, I feel strongly that it would be better if we reverse-engineered subcortical algorithms first, and neocortical algorithms second.

Conclusion

Well, my brief summary wasn’t all that brief after all! Congratulations on making it this far! I’m very open to questions, discussion, and criticism. I’ve already revised my views on all these topics numerous times, and expect to do so again. :-)