The Brain as a Universal Learning Machine

This article presents an emerging architectural hypothesis of the brain as a biological implementation of a Universal Learning Machine. I present a rough but complete architectural view of how the brain works under the universal learning hypothesis. I also contrast this new viewpoint—which comes from computational neuroscience and machine learning—with the older evolved modularity hypothesis popular in evolutionary psychology and the heuristics and biases literature. These two conceptions of the brain lead to very different predictions for the likely route to AGI, the value of neuroscience, the expected differences between AGI and humans, and thus any consequent safety issues and dependent strategies.

Art generated by an artificial neural net

(The image above is from a recent mysterious post to r/machinelearning, probably from a Google project that generates art based on a visualization tool used to inspect the patterns learned by convolutional neural networks. I am especially fond of the weird figures riding the cart in the lower left.)

  1. Intro: Two Viewpoints on the Mind

  2. Universal Learning Machines

  3. Historical Interlude

  4. Dynamic Rewiring

  5. Brain Architecture (the whole brain in one picture and a few pages of text)

  6. The Basal Ganglia

  7. Implications for AGI

  8. Conclusion

Intro: Two Viewpoints on the Mind

Few discoveries are more irritating than those that expose the pedigree of ideas.

-- Lord Acton (probably)

Less Wrong is a site devoted to refining the art of human rationality, where rationality is based on an idealized conceptualization of how minds should or could work. Less Wrong and its founding sequences draw heavily on the heuristics and biases literature in cognitive psychology and related work in evolutionary psychology. More specifically, the sequences build upon a specific cluster in the space of cognitive theories, which can be identified in particular with the highly influential “evolved modularity” perspective of Cosmides and Tooby.

From Wikipedia:

Evolutionary psychologists propose that the mind is made up of genetically influenced and domain-specific[3] mental algorithms or computational modules, designed to solve specific evolutionary problems of the past.[4]

From “Evolutionary Psychology and the Emotions”:[5]

An evolutionary perspective leads one to view the mind as a crowded zoo of evolved, domain-specific programs. Each is functionally specialized for solving a different adaptive problem that arose during hominid evolutionary history, such as face recognition, foraging, mate choice, heart rate regulation, sleep management, or predator vigilance, and each is activated by a different set of cues from the environment.

If you imagine these general theories or perspectives on the brain/mind as points in theory space, the evolved modularity cluster posits that much of the machinery of human mental algorithms is largely innate. General learning—if it exists at all—exists only in specific modules; in most modules learning is relegated to the role of adapting existing algorithms and acquiring data; the impact of the information environment is de-emphasized. In this view the brain is a complex, messy kludge of evolved mechanisms.

There is another viewpoint cluster, more popular in computational neuroscience (especially today), that is almost the exact opposite of the evolved modularity hypothesis. I will rebrand this viewpoint the “universal learner” hypothesis, aka the “one learning algorithm” hypothesis (the rebranding is justified mainly by the inclusion of some newer theories and evidence for the basal ganglia as a ‘CPU’ which learns to control the cortex). The roots of the universal learning hypothesis can be traced back to Mountcastle’s discovery of the simple uniform architecture of the cortex.[6]

The universal learning hypothesis proposes that all significant mental algorithms are learned; nothing is innate except for the learning and reward machinery itself (which is somewhat complicated, involving a number of systems and mechanisms), the initial rough architecture (equivalent to a prior over mindspace), and a small library of simple innate circuits (analogous to the operating system layer in a computer). In this view the mind (software) is distinct from the brain (hardware). The mind is a complex software system built out of a general learning mechanism.

Simplifying somewhat, the main difference between these viewpoints is the relative quantity of domain-specific mental algorithmic information specified in the genome vs that acquired through general purpose learning during the organism’s lifetime: evolved modules vs learned modules.

When you have two hypotheses or viewpoints that are almost complete opposites, this is generally a sign that the field is in an early state of knowledge; further experiments typically are required to resolve the conflict.

It has been about 25 years since Cosmides and Tooby began to popularize the evolved modularity hypothesis. A number of key neuroscience experiments have been performed since then which support the universal learning hypothesis (reviewed later in this article).

Additional indirect support comes from the rapid unexpected success of Deep Learning[7], which is entirely based on building AI systems using simple universal learning algorithms (such as Stochastic Gradient Descent or various other approximate Bayesian methods[8][9][10][11]) scaled up on fast parallel hardware (GPUs). Deep Learning techniques have quickly come to dominate most of the key AI benchmarks including vision[12], speech recognition[13][14], various natural language tasks, and now even Atari[15]—proving that simple architectures (priors) combined with universal learning is a path (and perhaps the only viable path) to AGI. Moreover, the internal representations that develop in some deep learning systems are structurally and functionally similar to representations in analogous regions of biological cortex[16].
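To make the "simple universal learning algorithm" concrete, here is a minimal stochastic gradient descent loop. The one-parameter model, learning rate, and toy data are my own choices for illustration, not anything from the systems cited above.

```python
import random

def sgd_fit(data, lr=0.05, epochs=100, seed=0):
    """Fit y ≈ w*x by stochastic gradient descent on squared error,
    one sample at a time. The 'universal' part is the generic
    follow-the-gradient rule, not anything task-specific."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for x, y in data:
            err = w * x - y        # prediction error on this sample
            w -= lr * 2 * err * x  # gradient of (w*x - y)**2 w.r.t. w
    return w

# Toy data from y = 3x plus a tiny deterministic wiggle.
data = [(i / 10, 3.0 * (i / 10) + 0.01 * ((-1) ** i)) for i in range(1, 10)]
w = sgd_fit(data)
print(round(w, 2))  # ≈ 3.0
```

The same update rule, scaled to millions of parameters and run on GPUs, is the core of the deep learning results cited above.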

To paraphrase Feynman: to truly understand something you must build it.

In this article I am going to quickly introduce the abstract concept of a universal learning machine, present an overview of the brain’s architecture as a specific type of universal learning machine, and finally I will conclude with some speculations on the implications for the race to AGI and AI safety issues in particular.

Universal Learning Machines

A universal learning machine is a simple and yet very powerful and general model for intelligent agents. It is an extension of a general computer—such as a Turing Machine—amplified with a universal learning algorithm. Do not view this as my ‘big new theory’; it is simply an amalgamation of a set of related proposals by various researchers.

An initial untrained seed ULM can be defined by 1) a prior over the space of models (or equivalently, programs), 2) an initial utility function, and 3) the universal learning machinery/algorithm. The machine is a real-time system that processes an input sensory/observation stream and produces an output motor/action stream to control the external world using a learned internal program that is the result of continuous self-optimization.
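The three-part seed definition above can be sketched in code. Every name here (`SeedULM`, `count_learn`, the placeholder policy) is hypothetical scaffolding meant only to show the shape of the definition, not a proposed implementation.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class SeedULM:
    """Hypothetical sketch of an untrained 'seed' ULM: the three pieces
    listed above, before any experience arrives."""
    prior: Callable[[Any], float]      # 1) prior over model/program space
    utility: Callable[[Any], float]    # 2) initial utility function
    learn: Callable[[Any, Any], Any]   # 3) universal learning machinery
    program: Any = None                # learned internal program (starts empty)

    def step(self, observation):
        # One real-time tick: self-optimize the internal program on the
        # new observation, then act using the updated program.
        self.program = self.learn(self.program, observation)
        return self.act(observation)

    def act(self, observation):
        # Placeholder policy: whatever the learned program maps this
        # observation to (None until something has been learned).
        return (self.program or {}).get(observation)

# A trivial stand-in 'learning algorithm': count observations.
def count_learn(program, obs):
    program = dict(program or {})
    program[obs] = program.get(obs, 0) + 1
    return program

agent = SeedULM(prior=lambda m: 1.0, utility=lambda s: 0.0, learn=count_learn)
agent.step("light")
agent.step("light")
print(agent.program)  # {'light': 2}
```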

There is of course always room to smuggle in arbitrary innate functionality via the prior, but in general the prior is expected to be extremely small in bits in comparison to the learned model.

The key defining characteristic of a ULM is that it uses its universal learning algorithm for continuous recursive self-improvement with regards to the utility function (reward system). We can view this as second (and higher) order optimization: the ULM optimizes the external world (first order), and also optimizes its own internal optimization process (second order), and so on. Without loss of generality, any system capable of computing a large number of decision variables can also compute internal self-modification decisions.

Conceptually the learning machinery computes a probability distribution over program-space that is proportional to the expected utility distribution. At each timestep it receives a new sensory observation and expends some amount of computational energy to infer an updated (approximate) posterior distribution over its internal program-space: an approximate ‘Bayesian’ self-improvement.
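One hedged way to write down the update this paragraph gestures at (the symbols and the exponential utility weighting are my own notational choices, not the article's):

```latex
% Approximate utility-weighted Bayesian update over program-space:
% p_t is the current distribution over internal programs m,
% o_t the new observation, U the utility; beta is a free weighting
% parameter (a notational choice, not from the text).
p_{t+1}(m) \;\propto\; p_t(m)\, p(o_t \mid m)\, \exp\!\big(\beta\, \mathbb{E}[U \mid m]\big)
```

The computational energy expended at each timestep then determines how accurately this posterior can be approximated.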

The above description is intentionally vague in the right ways to cover the wide space of possible practical implementations and current uncertainty. You could view AIXI as a particular formalization of the above general principles, although it is also as dumb as a rock in any practical sense and has other potential theoretical problems. Although the general idea is simple enough to convey in the abstract, one should beware of concise formal descriptions: practical ULMs are too complex to reduce to a few lines of math.

A ULM inherits the general property of a Turing Machine that it can compute anything that is computable, given appropriate resources. However, a ULM is also more powerful than a TM. A Turing Machine can only do what it is programmed to do. A ULM automatically programs itself.

If you were to open up an infant ULM—a machine with zero experience—you would mainly just see the small initial code for the learning machinery. The vast majority of the codestore starts out empty—initialized to noise. (In the brain the learning machinery is built in at the hardware level for maximal efficiency).

Theoretical Turing Machines are all qualitatively alike, and are all qualitatively distinct from any non-universal machine. Likewise for ULMs. Theoretically a small ULM is just as general/expressive as a planet-sized ULM. In practice quantitative distinctions do matter, and can become effectively qualitative.

Just as the simplest possible Turing Machine is in fact quite simple, the simplest possible Universal Learning Machine is also probably quite simple. A couple of recent proposals for simple universal learning machines include the Neural Turing Machine[16] (from Google DeepMind), and Memory Networks[17]. The core of both approaches involves training an RNN to learn how to control a memory store through gating operations.
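The shared core of both proposals (a controller gating reads and writes to a memory store) can be caricatured in a few lines. The per-slot gates below stand in for what a trained RNN controller would emit; this is a cartoon, not either paper's actual addressing mechanism.

```python
def gated_memory_step(memory, gates, write_value):
    """One step of controller-gated memory access. `gates` holds one soft
    value in [0, 1] per slot. Read: gate-weighted sum of slots.
    Write: gated blend of each slot toward write_value."""
    read = sum(g * m for g, m in zip(gates, memory))
    memory = [g * write_value + (1 - g) * m for g, m in zip(gates, memory)]
    return read, memory

mem = [1.0, 2.0, 3.0]
read, mem = gated_memory_step(mem, gates=[0.0, 1.0, 0.0], write_value=9.0)
print(read, mem)  # 2.0 [1.0, 9.0, 3.0]
```

Because the gates are differentiable, the controller that produces them can itself be trained by gradient descent, which is what makes these architectures learnable end to end.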

Historical Interlude

At this point you may be skeptical: how could the brain be anything like a universal learner? What about all of the known innate biases/errors in human cognition? I’ll get to that soon, but let’s start by thinking of a couple of general experiments to test the universal learning hypothesis vs the evolved modularity hypothesis.

In a world where the ULH is mostly correct, what do we expect to be different than in worlds where the EMH is mostly correct?

One type of evidence that would support the ULH is the demonstration of key structures in the brain, along with associated wiring, such that the brain can be shown to directly implement some version of a ULM architecture.

Another type of indirect evidence that would help discriminate the two theories would be evidence that the brain is capable of general global optimization, and that complex domain specific algorithms/circuits mostly result from this process. If on the other hand the brain is only capable of constrained/local optimization, then most of the complexity must instead be innate—the result of global optimization in evolutionary deeptime. So in essence it boils down to the optimization capability of biological learning vs biological evolution.

From the perspective of the EMH, it is not sufficient to demonstrate that there are things that brains cannot learn in practice—because those simply could be quantitative limitations. Demonstrating that an Intel 486 can’t compute some known computable function in our lifetimes is not proof that the 486 is not a Turing Machine.

Nor is it sufficient to demonstrate that biases exist: a ULM is only ‘rational’ to the extent that its observational experience and learning machinery allows (and to the extent one has the correct theory of rationality). In fact, the existence of many (most?) biases intrinsically depends on the EMH—based on the implicit assumption that some cognitive algorithms are innate. If brains are mostly ULMs then most cognitive biases dissolve, or become learning biases—for if all cognitive algorithms are learned, then evidence for biases is evidence for cognitive algorithms that people haven’t had sufficient time/energy/motivation to learn. (This does not imply that intrinsic limitations/biases do not exist or that the study of cognitive biases is a waste of time; rather the ULH implies that educational history is what matters most.)

The genome can only specify a limited amount of information. The question is then how much of our advanced cognitive machinery for things like facial recognition, motor planning, language, logic, planning, etc. is innate vs learned. From evolution’s perspective there is a huge advantage to preloading the brain with innate algorithms so long as said algorithms have high expected utility across the expected domain landscape.

On the other hand, evolution is also highly constrained in a bit coding sense: every extra bit of code costs additional energy for the vast number of cellular replication events across the lifetime of the organism. Low code complexity solutions also happen to be exponentially easier to find. These considerations seem to strongly favor the ULH, but they are difficult to quantify.

Neuroscientists have long known that the brain is divided into physical and functional modules. These modular subdivisions were discovered a century ago by Brodmann. Every time neuroscientists opened up a new brain, they saw the same old cortical modules in the same old places doing the same old things. The specific layout of course varied from species to species, but the variations between individuals are minuscule. This evidence seems to strongly favor the EMH.

Throughout most of the 1990s and into the 2000s, computational neuroscience models and AI were heavily influenced by—and unsurprisingly, largely supported—the EMH. Neural nets and backprop were known of course since the 1980s and worked on small problems[18], but at the time they didn’t scale well—and there was no theory to suggest they ever would.

Theory of the time also suggested local minima would always be a problem (now we understand that local minima are not really the main problem[19], and modern stochastic gradient descent methods combined with highly overcomplete models and stochastic regularization[20] are effectively global optimizers that can often handle obstacles such as local minima and saddle points[21]).

The other related historical criticism rests on the lack of biological plausibility for backprop style gradient descent. (There is as of yet little consensus on how the brain implements the equivalent machinery, but target propagation is one of the more promising recent proposals[22][23].)

Many AI researchers are naturally interested in the brain, and we can see the influence of the EMH in much of the work before the deep learning era. HMAX is a hierarchical vision system developed in the late 1990s by Poggio et al. as a working model of biological vision[24]. It is based on a preconfigured hierarchy of modules, each of which has its own mix of innate features such as Gabor edge detectors along with a little bit of local learning. It implements the general idea that complex algorithms/features are innate—the result of evolutionary global optimization—while neural networks (incapable of global optimization) use Hebbian local learning to fill in details of the design.

Dynamic Rewiring

In a groundbreaking study from 2000 published in Nature, Sharma et al. successfully rewired ferret retinal pathways to project into the auditory cortex instead of the visual cortex.[25] The result: auditory cortex can become visual cortex, just by receiving visual data! Not only does the rewired auditory cortex develop the specific Gabor features characteristic of visual cortex; the rewired cortex also becomes functionally visual.[26] True, it isn’t quite as effective as normal visual cortex, but that could also possibly be an artifact of crude and invasive brain rewiring surgery.

The ferret study was popularized by the book On Intelligence by Hawkins in 2004 as evidence for a single cortical learning algorithm. This helped percolate the evidence into the wider AI community, and thus probably helped set the stage for the deep learning movement of today. The modern view of the cortex is that of a mostly uniform set of general purpose modules which slowly become recruited for specific tasks and filled with domain specific ‘code’ as a result of the learning (self optimization) process.

The next key set of evidence comes from studies of atypical human brains with novel extrasensory powers. In 2009 Vuillerme et al. showed that the brain could automatically learn to process sensory feedback rendered onto the tongue[27]. This research was developed into a complete device that allows blind people to develop primitive tongue-based vision.

In the modern era some blind humans have apparently acquired the ability to perform echolocation (sonar), similar to cetaceans. In 2011 Thaler et al. used MRI and PET scans to show that human echolocators use diverse non-auditory brain regions to process echo clicks, predominantly relying on re-purposed ‘visual’ cortex.[27]

The echolocation study in particular helps establish the case that the brain is actually doing global, highly nonlocal optimization—far beyond simple Hebbian dynamics. Echolocation is an active sensing strategy that requires very low latency processing, involving complex timed coordination between a number of motor and sensory circuits—all of which must be learned.

Somehow the brain is dynamically learning how to use and assemble cortical modules to implement mental algorithms: everyday tasks such as visual counting, comparisons of images or sounds, reading, etc.—all are tasks that require simple mental programs that can shuffle processed data between modules (some or any of which can also function as short term memory buffers).

To explain this data, we should be on the lookout for a system in the brain that can learn to control the cortex—a general system that dynamically routes data between different brain modules to solve domain specific tasks.

But first let’s take a step back and start with a high level architectural view of the entire brain to put everything in perspective.

Brain Architecture

Below is a circuit diagram for the whole brain. Each of the main subsystems works together with the others, and they are best understood together. You can probably get a good, high-level, extremely coarse understanding of the entire brain in less than one hour.

(There are a couple of circuit diagrams of the whole brain on the web, but this is the best. From this site.)

The human brain has ~100 billion neurons and ~100 trillion synapses, but ultimately it evolved from the bottom up—from organisms with just hundreds of neurons, like the tiny brain of C. elegans.

We know that evolution is code complexity constrained: much of the genome codes for cellular metabolism, all the other organs, and so on. For the brain, most of its bit budget needs to be spent on all the complex neuron, synapse, and even neurotransmitter level machinery—the low level hardware foundation.

For a tiny brain with 1000 neurons or less, the genome can directly specify each connection. As you scale up to larger brains, evolution needs to create vastly more circuitry while still using only about the same amount of code/bits. So instead of specifying connectivity at the neuron layer, the genome codes connectivity at the module layer. Each module can be built from simple procedural/fractal expansion of progenitor cells.
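The bit-budget point can be made concrete with a toy sketch: the 'genome' below is just a fixed tiling rule plus a tiny module-level adjacency list, and the same few lines generate a module of any size. The module names and the local wiring rule are invented for illustration.

```python
def expand_module(rows, cols):
    """Tile one module procedurally: every cell wires to its right and
    lower neighbor. The generative rule is fixed, so its 'bit cost'
    does not grow with module size."""
    edges = set()
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((0, 1), (1, 0)):
                if r + dr < rows and c + dc < cols:
                    edges.add(((r, c), (r + dr, c + dc)))
    return edges

# Genome-level wiring: a tiny module-layer adjacency list (invented names).
module_graph = {"V1": ["V2"], "V2": ["V1", "PFC"], "PFC": ["V2"]}

small = expand_module(4, 4)    # 24 local edges
large = expand_module(40, 40)  # 3120 local edges from the same rule
print(len(small), len(large))
```

The circuit grows quadratically while the specification stays constant, which is the sense in which big modules can be cheap in genome bits.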

So the size of a module has little to nothing to do with its innate complexity. The cortical modules are huge—V1 alone contains 200 million neurons in a human—but there is no reason to suspect that V1 has greater initial code complexity than any other brain module. Big modules are built out of simple procedural tiling patterns.

Very roughly, the brain’s main modules can be divided into six subsystems (there are numerous smaller subsystems):

  • The neocortex: the brain’s primary computational workhorse (blue/purple modules at the top of the diagram). Kind of like a bunch of general purpose FPGA coprocessors.

  • The cerebellum: another set of coprocessors with a simpler feedforward architecture. Specializes more in motor functionality.

  • The thalamus: the orangish modules below the cortex. Kind of like a relay/routing bus.

  • The hippocampal complex: the apex of the cortex, and something like the brain’s database.

  • The amygdala and limbic reward system: these modules specialize in something like the value function.

  • The Basal Ganglia (green modules): the central control system, similar to a CPU.

In the interest of space/time I will focus primarily on the Basal Ganglia, and will just touch on the other subsystems very briefly and provide some links to further reading.

The neocortex has been studied extensively and is the main focus of several popular books on the brain. Each neocortical module is a 2D array of neurons (technically 2.5D, with a depth of a few dozen neurons arranged in about 5 to 6 layers).

Each cortical module is something like a general purpose RNN (recurrent neural network) with 2D local connectivity. Each neuron connects to its neighbors in the 2D array. Each module also has nonlocal connections to other brain subsystems, and these connections follow the same local 2D connectivity pattern, in some cases with some simple affine transformations. Convolutional neural networks use the same general architecture (but they are typically not recurrent).
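A minimal caricature of a 2D locally connected recurrent update, to make "each neuron connects to its neighbors in the 2D array" concrete. The averaging rule is an arbitrary stand-in for real neural dynamics.

```python
def cortical_step(grid):
    """One recurrent tick of a 2D 'module': each unit averages itself
    with its lattice neighbors. A cartoon of 2D local connectivity,
    not a model of real cortical dynamics."""
    rows, cols = len(grid), len(grid[0])
    nxt = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nbrs = [grid[r + dr][c + dc]
                    for dr, dc in ((0, 0), (0, 1), (0, -1), (1, 0), (-1, 0))
                    if 0 <= r + dr < rows and 0 <= c + dc < cols]
            nxt[r][c] = sum(nbrs) / len(nbrs)
    return nxt

g = [[0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 0.0]]
g = cortical_step(g)  # the central activity spreads to its 4 neighbors
```

Iterating the step propagates activity across the sheet purely through local connections, which is the property the convolutional analogy rests on.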

Cortical modules—like artificial RNNs—are general purpose and can be trained to perform various tasks. There are a huge number of models of the cortex, varying across the tradeoff between biological realism and practical functionality.

Perhaps surprisingly, any of a wide variety of learning algorithms can reproduce cortical connectivity and features when trained on appropriate sensory data[27]. This is a computational proof of the one-learning-algorithm hypothesis; furthermore it illustrates the general idea that data determines functional structure in any general learning system.

There is evidence that cortical modules learn automatically (unsupervised) to some degree, and there is also some evidence that cortical modules can be trained to relearn data from other brain subsystems—namely the hippocampal complex. The dark knowledge distillation technique in ANNs[28][29] is a potential natural analog/model of hippocampus → cortex knowledge transfer.
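The distillation analogy can be made concrete with the temperature-softened targets used in dark knowledge: a teacher's scores (standing in, very loosely, for hippocampal replay targets) are softened so they carry more information per example. The logit values here are arbitrary.

```python
import math

def soften(logits, T):
    """Temperature-softened targets, as in dark knowledge distillation:
    softmax of logits divided by temperature T."""
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

teacher_logits = [4.0, 1.0, 0.2]      # arbitrary 'teacher' scores
hard = soften(teacher_logits, T=1.0)  # nearly one-hot
soft = soften(teacher_logits, T=4.0)  # spreads mass across alternatives
# A student network trained toward `soft` receives more information per
# example than one trained on hard labels alone.
```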

Module connections are bidirectional, and feedback connections (from high level modules to low level) outnumber forward connections. We can speculate that something like target propagation could also be used to guide or constrain the development of cortical maps.

The hippocampal complex is the root or top level of the sensory/motor hierarchy. This short YouTube video gives a good seven minute overview of the HC. It is like a spatiotemporal database. It receives compressed scene descriptor streams from the sensory cortices, it stores this information in medium-term memory, and it supports later auto-associative recall of these memories. Imagination and memory recall seem to be basically the same.

The ‘scene descriptors’ take the sensible form of things like 3D position and camera orientation, as encoded in place, grid, and head direction cells. This is basically the logical result of compressing the sensory stream, comparable to the networking data stream in a multiplayer video game.

Imagination/recall is basically just the reverse of the forward sensory coding path—in reverse mode a compact scene descriptor is expanded into a full imagined scene. Imagined/remembered scenes activate the same cortical subnetworks that originally formed the memory (or would have if the memory were real, in the case of imagined recall).

The amygdala and associated limbic reward modules are rather complex, but look something like the brain’s version of the value function for reinforcement learning. These modules are interesting because they clearly rely on learning, but the brain must also clearly specify an initial version of the value/utility function that has some minimal complexity.

As an example, consider taste. Infants are born with basic taste detectors and a very simple initial value function for taste. Over time the brain receives feedback from digestion and various estimators of general mood/health, and it uses this to refine the initial taste value function. Eventually the adult sense of taste becomes considerably more complex. Acquired tastes for bitter substances—such as coffee and beer—are good examples.
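The taste story above is essentially an incremental value update driven by a reward signal. A toy version (the learning rate, reward value, and dictionary representation are all invented for illustration, not a claim about the actual limbic learning rule):

```python
def refine_value(value, taste, reward, lr=0.3):
    """Nudge the current taste value toward the observed post-ingestion
    reward. A cartoon of refining an innate value function."""
    value[taste] += lr * (reward - value[taste])
    return value

value = {"sweet": 0.8, "bitter": -0.5}  # crude 'innate' starting values
for _ in range(20):                     # repeated coffee + positive feedback
    value = refine_value(value, "bitter", reward=0.6)
print(round(value["bitter"], 2))  # ≈ 0.6: the acquired taste
```

The innate values only need to be roughly right; the feedback loop does the rest, which is why the genome can get away with specifying a very crude initial value function.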

The amygdala appears to do something similar for emotional learning. For example, infants are born with a simple version of a fear response, which is later refined through reinforcement learning. The amygdala sits on the end of the hippocampus, and it is also involved heavily in memory processing.

See also these two videos from Khan Academy: one on the limbic system and amygdala (10 mins), and another on the midbrain reward system (8 mins).

The Basal Ganglia

The Basal Ganglia is a weird-looking complex of structures located in the center of the brain. It is a conserved structure found in all vertebrates, which suggests a core functionality. The BG is proximal to and connects heavily with the midbrain reward/limbic systems. It also connects to the brain’s various modules in the cortex/hippocampus, thalamus and the cerebellum . . . basically everything.

All of these connections form recurrent loops between associated compartmental modules in each structure: thalamocortical/hippocampal-cerebellar-basal ganglial loops.

Just as the cortex and hippocampus are subdivided into modules, there are corresponding modular compartments in the thalamus, basal ganglia, and the cerebellum. The set of modules/compartments in each main structure are all highly interconnected with their correspondents across structures, leading to the concept of distributed processing modules (DPMs).

Each DPM forms a recurrent loop across brain structures (the local networks in the cortex, BG, and thalamus are also locally recurrent, whereas those in the cerebellum are not). These recurrent loops are mostly separate, but each sub-structure also provides different opportunities for inter-loop connections.

The BG appears to be involved in essentially all higher cognitive functions. Its core functionality is action selection via subnetwork switching. In essence action selection is the core problem of intelligence, and it is also general enough to function as the building block of all higher functionality. A system that can select between motor actions can also select between tasks or subgoals. More generally, low level action selection can easily form the basis of a Turing Machine via selective routing: deciding where to route the output of thalamocortical-cerebellar modules (some of which may specialize in short term memory, as in the prefrontal cortex, although all cortical modules have some short term memory capability).

There are now a number of computational models for the Basal Ganglia-Cortical system that demonstrate possible biologically plausible implementations of the general theory[28][29]; integration with the hippocampal complex leads to larger-scale systems which aim to model/explain most of higher cognition in terms of sequential mental programs[30] (of course fully testing any such models awaits sufficient computational power to run very large-scale neural nets).

For an extremely oversimplified model of the BG as a dynamic router, consider an array of N distributed modules controlled by the BG system. The BG control network expands these N inputs into an N×N matrix. There are N² potential intermodular connections, each of which can be individually controlled. The control layer reads a compressed, downsampled version of the module’s hidden units as its main input, and is also recurrent. Each output node in the BG has a multiplicative gating effect which selectively enables/disables an individual intermodular connection. If the control layer were naively fully connected, this would require (N²)² connections, which is only feasible for N ~ 100 modules, but sparse connectivity can substantially reduce those numbers.
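The N×N multiplicative gating model can be written out directly. Binary gates and three modules are chosen for readability; real gates would presumably be graded and learned.

```python
def route(outputs, gates):
    """BG-as-router cartoon: gates[i][j] enables the connection from
    module j to module i; each module's next input is the gated sum
    of all module outputs."""
    n = len(outputs)
    return [sum(gates[i][j] * outputs[j] for j in range(n)) for i in range(n)]

outputs = [1.0, 2.0, 3.0]
gates = [[0, 0, 1],   # module 0 listens to module 2
         [1, 0, 0],   # module 1 listens to module 0
         [0, 0, 0]]   # module 2 receives nothing this step
print(route(outputs, gates))  # [3.0, 1.0, 0.0]
```

Changing the gate matrix from timestep to timestep is what turns a fixed set of modules into a programmable system, which is the sense in which selective routing can implement a Turing Machine.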

It is unclear (to me) whether the BG actually implements N×N style routing as described above, or something more like 1×N or N×1 routing, but there is general agreement that it implements cortical routing.

Of course in actuality the BG architecture is considerably more complex, as it also must implement reinforcement learning, and the intermodular connectivity map itself is also probably quite sparse/compressed (the BG may not control all of cortex, certainly not at a uniform resolution, and many controlled modules may have a very limited number of allowed routing decisions). Nonetheless, the simple multiplicative gating model illustrates the core idea.

This same multiplicative gating mechanism is the core principle behind the highly successful LSTM (Long Short-Term Memory)[30] units that are used in various deep learning systems. The simple version of the BG’s gating mechanism can be considered a wider parallel and hierarchical extension of the basic LSTM architecture, where you have a parallel array of N memory cells instead of one, and each memory cell is a large vector instead of a single scalar value.
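The claimed correspondence can be sketched by widening the scalar LSTM cell update c ← f·c + w·x to an array of N vector-valued cells. This keeps only the multiplicative gating idea and drops the rest of the LSTM machinery (learned gate networks, output gating, nonlinearities).

```python
def gated_cells_step(cells, forget, write, inputs):
    """LSTM-style multiplicative gating widened to N vector-valued
    cells: c_i <- f_i * c_i + w_i * x_i, with one forget gate f_i and
    one write gate w_i per cell."""
    return [[f * c + w * x for c, x in zip(cell, inp)]
            for cell, f, w, inp in zip(cells, forget, write, inputs)]

cells = [[1.0, 1.0], [2.0, 2.0]]  # N=2 cells, each a vector
new = gated_cells_step(cells,
                       forget=[1.0, 0.0],           # keep cell 0, erase cell 1
                       write=[0.0, 1.0],            # write only into cell 1
                       inputs=[[0.0, 0.0], [5.0, 6.0]])
print(new)  # [[1.0, 1.0], [5.0, 6.0]]
```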

The main ad­van­tage of the BG ar­chi­tec­ture is par­allel hi­er­ar­chi­cal ap­prox­i­mate con­trol: it al­lows a large num­ber of hi­er­ar­chi­cal con­trol loops to up­date and in­fluence each other in par­allel. It also re­duces the huge com­plex­ity of gen­eral rout­ing across the full cor­tex down into a much smaller-scale, more man­age­able rout­ing challenge.

Im­pli­ca­tions for AGI

Th­ese two con­cep­tions of the brain—the uni­ver­sal learn­ing ma­chine hy­poth­e­sis and the evolved mod­u­lar­ity hy­poth­e­sis—lead to very differ­ent pre­dic­tions for the likely route to AGI, the ex­pected differ­ences be­tween AGI and hu­mans, and thus any con­se­quent safety is­sues and strate­gies.

In the extreme case, imagine that the brain is a pure ULM, such that the genetic prior information is close to zero or is simply unimportant. In this case it is vastly more likely that successful AGI will be built around designs very similar to the brain, as the ULM architecture in general is the natural ideal, versus the alternative of having to hand-engineer all of the AI's various cognitive mechanisms.

In re­al­ity learn­ing is com­pu­ta­tion­ally hard, and any prac­ti­cal gen­eral learn­ing sys­tem de­pends on good pri­ors to con­strain the learn­ing pro­cess (es­sen­tially tak­ing ad­van­tage of pre­vi­ous knowl­edge/​learn­ing). The re­cent and rapid suc­cess of deep learn­ing is strong ev­i­dence for how much prior in­for­ma­tion is ideal: just a lit­tle. The prior in deep learn­ing sys­tems takes the form of a com­pact, small set of hy­per­pa­ram­e­ters that con­trol the learn­ing pro­cess and spec­ify the over­all net­work ar­chi­tec­ture (an ex­tremely com­pressed prior over the net­work topol­ogy and thus the pro­gram space).
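As an illustration of how compressed this prior is, consider a plain fully connected network (hypothetical sizes, not drawn from any particular system): a handful of hyperparameters pins down a topology with millions of learned weights:

```python
# A few hyperparameters specify the whole network topology, and thereby
# a prior over a parameter space of millions of weights. (Sizes here are
# made up for illustration.)
hyperparams = {
    "num_layers": 8,
    "hidden_size": 1024,
    "input_size": 4096,
    "output_size": 1000,
    "init_scale": 0.02,
    "learning_rate": 1e-3,
}

def param_count(hp):
    """Total weights (including biases) of the MLP the hyperparameters define."""
    sizes = ([hp["input_size"]]
             + [hp["hidden_size"]] * hp["num_layers"]
             + [hp["output_size"]])
    return sum(a * b + b for a, b in zip(sizes[:-1], sizes[1:]))

n = param_count(hyperparams)
print(f"{len(hyperparams)} hyperparameters -> {n:,} learned parameters")
```

Six numbers specify over twelve million parameters here; everything beyond the topology is filled in by learning from data.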

The ULH suggests that nearly everything that defines the human mind is cognitive software rather than hardware: the adult mind (in terms of algorithmic information) is 99.999% a cultural/memetic construct. Obviously there are some important exceptions: infants are born with some functional but very primitive sensory and motor processing 'code'. Most of the genome's complexity is used to specify the learning machinery and the associated reward circuitry. Infant emotions appear to simplify down to a single axis of happy/sad; differentiation into the more subtle vector space of adult emotions does not occur until later in development.

If the mind is soft­ware, and if the brain’s learn­ing ar­chi­tec­ture is already uni­ver­sal, then AGI could—by de­fault—end up with a similar dis­tri­bu­tion over mindspace, sim­ply be­cause it will be built out of similar gen­eral pur­pose learn­ing al­gorithms run­ning over the same gen­eral dataset. We already see ev­i­dence for this trend in the high func­tional similar­ity be­tween the fea­tures learned by some ma­chine learn­ing sys­tems and those found in the cor­tex.

Of course an AGI will have little need for some specific evolutionary features: emotions that are subconsciously broadcast via the facial muscles are a quirk unnecessary for an AGI—but that is a rather specific detail.

The key takeaway is that the data is what matters—and in the end it is all that matters. Train a universal learner on image data and it becomes a visual system. Train it on speech data and it becomes a speech recognizer. Train it on ATARI and it becomes a little gamer agent.

Train a universal learner on the real world in something like a human body and you get something like the human mind. Put a ULM in a dolphin's body and echolocation becomes the natural primary sense; put a ULM in a human body with broken visual wiring and you can also get echolocation.

Con­trol over train­ing is the most nat­u­ral and straight­for­ward way to con­trol the out­come.

To create a superhuman AI driver, you 'just' need to create a realistic VR driving sim and then train a ULM in that world (better training and the simple power of selective copying lead to superhuman driving capability).

So to cre­ate benev­olent AGI, we should think about how to cre­ate vir­tual wor­lds with the right struc­ture, how to ed­u­cate minds in those wor­lds, and how to safely eval­u­ate the re­sults.

One key idea—which I proposed five years ago—is that the AI should not know it is in a sim.

New AI designs (world design + architectural priors + training/education system) should be tested first in the safest virtual worlds, which, to simplify, are simply low-tech worlds without computer technology. Design combinations that work well in safe low-tech sandboxes are promoted to less safe high-tech VR worlds, and then finally the real world.

A key principle of a secure code sandbox is that the code you are testing should not be aware that it is in a sandbox. If you violate this principle then you have already failed. Yudkowsky's AI box thought experiment assumes the violation of the sandbox security principle a priori and thus is something of a distraction. (The virtual sandbox idea was most likely discussed elsewhere previously, as Yudkowsky indirectly critiques a strawman version of the idea via this sci-fi story.)

The vir­tual sand­box ap­proach also com­bines nicely with in­visi­ble thought mon­i­tors, where the AI’s thoughts are au­to­mat­i­cally dumped to search­able logs.

Of course we will still need a solution to the value learning problem. The natural route with brain-inspired AI is to learn the key ideas behind value acquisition in humans, and use them to derive an improved version of something like inverse reinforcement learning and/or imitation learning[31]—an interesting topic for another day.


Ray Kurzweil has been pre­dict­ing for decades that AGI will be built by re­verse en­g­ineer­ing the brain, and this par­tic­u­lar pre­dic­tion is not es­pe­cially unique—this has been a pop­u­lar po­si­tion for quite a while. My own in­ves­ti­ga­tion of neu­ro­science and ma­chine learn­ing led me to a similar con­clu­sion some time ago.

The recent progress in deep learning, combined with the emerging modern understanding of the brain, provides further evidence that AGI could arrive around the time when we can build and train ANNs with similar computational power, measured very roughly in terms of neuron/synapse counts. In general the evidence from the last four years or so supports Hanson's viewpoint from the Foom debate. More specifically, his general conclusion:

Fu­ture su­per­in­tel­li­gences will ex­ist, but their vast and broad men­tal ca­pac­i­ties will come mainly from vast men­tal con­tent and com­pu­ta­tional re­sources. By com­par­i­son, their gen­eral ar­chi­tec­tural in­no­va­tions will be minor ad­di­tions.

The ULH sup­ports this con­clu­sion.

Current ANN engines can already train and run models with around 10 million neurons and 10 billion (compressed/shared) synapses on a single GPU, which suggests that the goal could soon be within the reach of a large organization. Furthermore, Moore's Law for GPUs still has some steam left, and software advances are currently improving simulation performance at a faster rate than hardware. These trends imply that anthropomorphic/neuromorphic AGI could be surprisingly close, and may appear suddenly.
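For rough perspective, here is a back-of-the-envelope comparison. The brain figures are common order-of-magnitude estimates (~86 billion neurons, ~10^14 synapses), and the assumed ~10x effective capacity growth per two years (hardware plus software combined) is an illustrative assumption, not a measurement:

```python
import math

# Single-GPU ANN scale (per the text) vs rough human brain scale.
ann_neurons, ann_synapses = 1e7, 1e10
brain_neurons, brain_synapses = 8.6e10, 1e14  # common rough estimates

neuron_gap = brain_neurons / ann_neurons    # ~8.6e3
synapse_gap = brain_synapses / ann_synapses  # ~1e4

# Assumed combined hardware+software growth: ~10x capacity every 2 years.
years_to_close = math.log10(synapse_gap) * 2.0

print(f"synapse gap: {synapse_gap:.0e}, ~{years_to_close:.0f} years at 10x/2yr")
```

The point is not the specific number of years, but that a ~4 order-of-magnitude gap closes quickly under sustained exponential improvement.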

What kind of lev­er­age can we ex­ert on a short timescale?