Jeff Hawkins on neuromorphic AGI within 20 years

I just listened to the AI podcast episode Jeff Hawkins on the Thousand Brains Theory of Intelligence, and read some of the related papers. Jeff Hawkins is a theoretical neuroscientist; you may have heard of his 2004 book On Intelligence. Earlier, he had an illustrious career in EECS, including inventing the Palm Pilot. He now runs the company Numenta, which is dedicated to understanding how the human brain works (especially the neocortex), and to using that knowledge to develop bio-inspired AI algorithms.

In no particular order, here are some highlights and commentary from the podcast and associated papers.

Every part of the neocortex is running the same algorithm

The neocortex is the outermost and most evolutionarily-recent layer of the mammalian brain. In humans, it is about the size and shape of a dinner napkin (maybe 1500 cm² × 3 mm), and constitutes 75% of the entire brain. Jeff wants us to think of it as 150,000 side-by-side “cortical columns”, each of which is a little 1 mm² × 3 mm tube, although I don’t think we’re supposed to take the “column” thing too literally (there’s no sharp demarcation between neighboring columns).

When you look at a diagram of the brain, the neocortex has loads of different parts that do different things—motor, sensory, visual, language, cognition, planning, and more. But Jeff says that all 150,000 of these cortical columns are virtually identical! Not only do they each have the same types of neurons, but they’re laid out in the same configuration, with the same wiring and the same larger-scale structures. In other words, there seems to be “general-purpose neocortical tissue”: if you dump visual information into it, it does visual processing, and if you connect it to motor control pathways, it does motor control, etc. He said that this theory originated with Vernon Mountcastle in the 1970s, and is now widely (but not universally) accepted in neuroscience. The theory is supported both by examining different parts of the brain under the microscope and by experiments: e.g., the fact that congenitally blind people can use their visual cortex for non-visual things, and, conversely, an old experiment he mentioned in passing in which a scientist attached the optic nerve of a lemur to a different part of the cortex and it was able to see (or something like that).

Anyway, if you accept that premise, then there is one type of computation that the neocortex does, and if we can figure it out, we’ll understand everything from how the brain does visual processing to how Einstein’s brain invented General Relativity.

To me, cortical uniformity seems slightly at odds with the wide variety of instincts we have, like intuitive physics, intuitive biology, language, and so on. Are those not implemented in the neocortex? Are they implemented as connections between (rather than within) cortical columns? Or what? This didn’t come up in the podcast. (ETA: I tried to answer this question in my later post, Human instincts, Symbol grounding, and the blank-slate neocortex.)

(See also previous LW discussion at: The brain as a universal learning machine, 2015)

Grid cells and displacement cells

Background: Grid cells for maps in the hippocampus

Grid cells, discovered in 2005, help animals build mental maps of physical spaces. (Grid cells are just one piece of a complicated machinery, along with “place cells” and other things, more on which shortly.) Grid cells are not traditionally associated with the neocortex, but rather the entorhinal cortex and hippocampus. But Jeff says that there’s some experimental evidence that they’re also in the neocortex, and proposes that this is very important.

What are grid cells? Numenta has an educational video here. Here’s my oversimplified 1D toy example (the modules can also be 2D). I have a cortical column with three “grid cell modules”. One module consists of 9 neurons, one has 10 neurons, and the third has 11. As I stand in a certain position in a room, one neuron from each of the three modules is active—let’s say the active neurons right now are (x mod 9, x mod 10, x mod 11) for some integer x. When I take a step rightward, all three are incremented by 1 (each wrapping around modulo its module’s size); when I take a step leftward, they’re each decremented by 1. The three modules together can thus keep track of 9 × 10 × 11 = 990 unique spatial positions (cf. Chinese Remainder Theorem).
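
Here’s a minimal Python sketch of this toy example, just to make the modular arithmetic concrete (my own illustration, not anything from Numenta’s code):

```python
# Toy 1D grid-cell model from the example above: three modules of sizes 9, 10,
# and 11, each tracking position modulo its own size. Purely illustrative.

MODULE_SIZES = [9, 10, 11]

def encode(position):
    """Which neuron is active in each module for a given 1D position."""
    return [position % size for size in MODULE_SIZES]

def step(active, direction):
    """Take one step left (-1) or right (+1): every module's counter shifts by 1."""
    return [(a + direction) % size for a, size in zip(active, MODULE_SIZES)]

print(encode(12))                    # [3, 2, 1]
print(step(encode(12), +1))          # [4, 3, 2], i.e. encode(13)
# By the Chinese Remainder Theorem, positions 0..989 all get distinct codes:
print(len({tuple(encode(p)) for p in range(990)}))   # 990
```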

With enough grid cell modules of incommensurate size, scale-factor, and (in 2D) rotation, the number of unique representable positions becomes massive, and there is room to have lots of entirely different spaces (each with their own independent reference frame) stored this way without worrying about accidental collisions.

So you enter a new room. Your brain starts by picking a point in the room and assigning it a random grid-cell code (a random (x mod 9, x mod 10, x mod 11) triple, in my toy 1D example), and then stores all the other locations in the room in reference to that. Then you enter a hallway. As you turn your attention to this new space, you pick a new random code and build your new hallway spatial map around that. So far so good, but there’s a missing ingredient: the transformation from the room map to the hallway map, especially in their areas of overlap. How does that work? Jeff proposes (in this paper) that there exist what he calls “displacement cells”, which (if I understand it correctly) literally implement modular arithmetic on the grid cell neurons in each grid cell module. So—still in the 1D toy example—the relation between the room map and the hall map might be represented by three displacement cell neurons (one for each of the three grid cell modules), and the neurons are wired up such that the brain can go back and forth between the two maps’ activation patterns.

So if grid cell #2 is active, and then displacement cell #5 turns on, it should activate grid cell #7 = 5 + 2 (wrapping around modulo the module size when necessary). It’s kinda funny, but why not? We just put in a bunch of synapses that hardcode each entry of an addition table—and not even a particularly large one.
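
Continuing the toy example in Python, a “displacement cell” amounts to looking up a hardcoded addition table, one table per module (again, this is just my sketch of the idea):

```python
# Displacement cells as hardcoded modular-addition tables, one per module.
# The specific codes below are made up for illustration.

MODULE_SIZES = [9, 10, 11]

# (active grid cell, active displacement cell) -> newly activated grid cell
addition_tables = [
    {(g, d): (g + d) % size for g in range(size) for d in range(size)}
    for size in MODULE_SIZES
]

def apply_displacement(grid_code, displacement_code):
    """Jump between two reference frames related by a displacement code."""
    return [table[(g, d)]
            for table, g, d in zip(addition_tables, grid_code, displacement_code)]

# The example from the text: grid cell #2 plus displacement cell #5 -> grid cell #7.
print(apply_displacement([2, 2, 2], [5, 5, 5]))   # [7, 7, 7]
# Near the end of a module, the sum wraps around (modular arithmetic):
print(apply_displacement([8, 8, 8], [5, 5, 5]))   # [4, 3, 2]
```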

(Overall, all the stuff about the detailed mechanisms of grid cells and displacement cells comes across to me as “Ingenious workaround for the limitations of biology”, not “Good idea that AI might want to copy”, but maybe I’m missing something.)

New idea: Grid cells for “maps” of objects and concepts in the neocortex

Anyway, Jeff theorizes that this grid cell machinery is not only used for navigating real spaces in the hippocampus, but also for navigating concept spaces in the neocortex.

Example #1: A coffee cup. We have a mental map of a coffee cup, and we can move around in that mental space by incrementing and decrementing the active grid cells (the x mod 9, x mod 10, x mod 11 counters in my 1D toy example).

Example #2: A coffee mug with a picture on it. Now, we have a mental map of the mug, and a separate mental map of the picture, and then a set of displacement cells describes where the picture is in relation to the mug. (This also includes relative rotation and scale, which I guess are also part of this grid cell + displacement cell machinery somehow, but he says he hasn’t worked out all the details.)

Example #3: A stapler, where the two halves move with respect to each other. This motion can be described by a sequence of displacement cells … and conveniently, neurons are excellent at learning temporal sequences (see below).

Example #4: Logarithms. Jeff thinks we have a reference frame for everything! Every word, every idea, every concept, everything you know has its own reference frame, in at least one of your cortical columns and probably thousands of them. Then displacement cells can encode the mathematical transformations of logarithms, and the relations between logarithms and other concepts, or something like that. I tried to sketch out an example of what he might be getting at in the next section below. Still, I found that his discussion of abstract cognition was a bit sketchier and more confusing than other things he talked about. My impression is that this is an aspect of the theory that he’s still working on.

“Thousand brains” theory

(See also Numenta educational video.) His idea here is that every one of the 150,000 “cortical columns” in the brain (see above) has the whole machinery with grid cells and displacement cells, reference frames for gazillions of different objects and concepts, and so on.

A cortical column that gets input from the tip of the finger is storing information and making predictions about what the tip of the finger will feel as it moves around the coffee cup. A cortical column in the visual cortex is storing information and making predictions about what it will see in its model of the coffee cup. And so on. If you reach into a box, and touch it with four fingers, each of those fingers is trying to fit its own data into its own model to learn what the object is, and there’s a “voting” mechanism that allows them to reach agreement on what it is.
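
As a cartoon of the voting idea (my own toy illustration with made-up object names, not Numenta’s actual mechanism): each column keeps a set of candidate objects consistent with its own evidence, and the columns settle on whatever is consistent with the most of them.

```python
# Toy "thousand brains" voting: each column narrows down its own candidate list,
# then the columns agree on the object compatible with the most of them.

from collections import Counter

column_candidates = {
    "index finger column":  {"coffee cup", "soda can", "stapler"},
    "middle finger column": {"coffee cup", "soda can"},
    "ring finger column":   {"coffee cup", "phone"},
    "little finger column": {"coffee cup", "soda can", "phone"},
}

votes = Counter()
for candidates in column_candidates.values():
    votes.update(candidates)

print(votes.most_common(1))   # [('coffee cup', 4)] -- the consensus object
```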

So I guess if you’re doing a math problem with a logarithm, and you’re visually imagining the word “log” floating to the other side of the equation and turning into an “exp”, then there’s a cortical column in your visual cortex that “knows” (temporal sequence memory) how this particular mathematical transformation works. Maybe the other cortical columns don’t “know” that that transformation is possible, but can find out the result via the voting mechanism.

Or maybe you’re doing the same math problem, but instead of visualizing the transformation, you recite to yourself the poem: “Inverse of log is exp”. Well, then this knowledge is encoded as temporal sequence memory in some cortical column of your auditory cortex.

There’s a homunculus-esque intuition that all these hundreds of thousands of models need to be brought together into one unified world model. Neuroscientists call this the “sensor fusion” problem. Jeff denies the whole premise: thousands of different incomplete world models, plus a voting mechanism, are all you need; there is no unified world model.

Is the separate world model for each cortical column an “Ingenious workaround for the limitations of biology” or a “Good idea that AI should copy”? On the one hand, clearly there’s some map between the concepts in different cortical columns, so that voting can work. That suggests that we can improve on biology by having one unified world model, but with many different coordinate systems and types of sensory prediction associated with each entry. On the other hand, maybe the map between entries of different columns’ world models is not a nice one-to-one map, but rather some fuzzy many-to-many map. Then unifying it into a single ontology might be fundamentally impossible (except trivially, as a disjoint union). I’m not sure. I guess I should look up how the voting mechanism is supposed to work.

Human-level AI, timelines, and existential risk

Jeff’s goal is to “understand intelligence” and then use it to build intelligent machines. He is confident that this is possible, and that the machines can be dramatically smarter than humans (e.g. thinking faster, more memory, better at physics and math, etc.). Jeff thinks the hard part is done—he has the right framework for understanding cortical algorithms, even if there are still some details to be filled in. Thus, Jeff believes that, if he succeeds at proselytizing his understanding of brain algorithms to the AI community (which is why he was doing that podcast), then we should be able to make machines with human-like intelligence in less than 20 years.

Near the end of the podcast, Jeff emphatically denounced the idea of AI existential risk, or more generally that there was any reason to second-guess his mission of getting beyond-human-level intelligence as soon as possible. However, he appears to be profoundly misinformed about both what the arguments are for existential risk and who is making them. Ditto for Lex, the podcast host.

Differences between actual neurons and artificial neural networks (ANNs)

Non-proximal synapses and recognizing time-based patterns

He brought up his paper Why do neurons have thousands of synapses?. Neurons have anywhere from 5,000 to 30,000 synapses, and there are two types. The synapses near the cell body (perhaps a few hundred) can cause the neuron to fire; these are the most similar to the connections in ANNs. The other ~95% are way out on a dendrite (a branch of the neuron), too far from the cell body to make it fire, even if all of them were activated at once! Instead, what happens is: if 10-40 of these synapses activate at the same time, and they’re all very close to each other on the dendrite, they create a “dendritic spike” that travels to the cell body and raises the voltage a little bit, but not enough to make the cell fire; then the voltage goes back down shortly thereafter. What good is that? If the neuron is triggered to fire (by the first type of synapses, the ones near the cell body), and it has already been primed by a dendritic spike, then it fires slightly sooner. That matters because there are fast inhibitory processes: if a neuron fires slightly before its neighbors, it can prevent those neighbors from firing at all.

So, there are dozens to hundreds of different patterns that the neuron can recognize—one for each close-together group of synapses on a dendrite—each of which can cause a dendritic spike. This allows networks of neurons to do sophisticated temporal predictions, he says: “Real neurons in the brain are time-based prediction engines, and there’s no concept of this at all” in ANNs; “I don’t think you can build intelligence without them”.

Another nice thing about this is that a neuron can learn a new pattern by forming a new cluster of synapses out on some dendrite, and it won’t alter the neuron’s other behavior—i.e., it’s an OR gate, so when that particular pattern is not occurring, the neuron behaves exactly as before.
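
Here’s a little Python sketch of how I understand that mechanism: each distal dendritic segment acts as a coincidence detector for one sparse pattern, the neuron is “predicted” if any segment crosses threshold, and learning a new pattern just means growing a new segment. All the specific numbers and names here are my own, not from the paper.

```python
# Toy neuron with distal dendritic segments acting as an OR of pattern detectors.
# Thresholds and segment contents are invented for illustration.

class ToyNeuron:
    def __init__(self, segments, threshold=10):
        # Each segment holds the presynaptic cells it has synapses onto.
        self.segments = [set(s) for s in segments]
        self.threshold = threshold

    def is_predicted(self, active_cells):
        """OR over segments: any one sufficiently-matched pattern depolarizes the neuron."""
        active_cells = set(active_cells)
        return any(len(seg & active_cells) >= self.threshold for seg in self.segments)

    def learn_pattern(self, active_cells, sample_size=15):
        """Learning = growing a new synapse cluster; existing segments are untouched."""
        self.segments.append(set(list(active_cells)[:sample_size]))

neuron = ToyNeuron(segments=[range(0, 20), range(100, 120)])
print(neuron.is_predicted(range(0, 12)))    # True: 12 matching synapses on segment #1
print(neuron.is_predicted(range(50, 62)))   # False: no segment recognizes this pattern
neuron.learn_pattern(range(50, 70))
print(neuron.is_predicted(range(50, 62)))   # True now, and the old behavior is unchanged
```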

Binary weights, sparse representations

Another difference: “synapses are very unreliable”; you can’t even assign one digit of precision to their connection strength. You have to think of them as almost binary. By contrast, I think most ANN weights are stored as 16- or 32-bit floats, i.e. with roughly 3 or 7 significant decimal digits of precision.

Related to this, “the brain works on sparse patterns”. He mentioned his paper How do neurons operate on sparse distributed representations? A mathematical theory of sparsity, neurons and active dendrites. He came back to this a couple times. Apparently in the brain, at any given moment, ~2% of neurons are firing. So imagine a little subpopulation of 10,000 neurons, and you’re trying to represent something with a population code of sets of 200 of these neurons. First, there’s an enormous space of possibilities (10,000-choose-200 such sets, an astronomically large number). Second, if you pick two random sets-of-200, their overlap is almost always just a few neurons. Even if you pick millions of sets, there won’t be any pair that significantly overlaps. Therefore a neuron can “listen” for, say, 15 of the 200 neurons comprising X, and if those 15 all fire at once, that must have been X. The low overlap between different sets also gives the system robustness, for example to neuron death. Based on these ideas, they recently published this paper advocating for sparseness in image classifier networks, which sounds to me like they’re reinventing neural network pruning, but maybe it’s slightly different, or at least better motivated.
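
Those claims are easy to sanity-check numerically. Here’s a quick Python simulation, with the 10,000 / 200 / 15 numbers taken from the paragraph above and everything else (seed, trial counts) made up:

```python
# Sanity check on sparse distributed representations: random 200-of-10,000 codes
# barely overlap, so "listening" for 15 of X's neurons essentially never
# false-alarms on a different code.

import random
random.seed(0)

N, K, TRIALS = 10_000, 200, 10_000

def random_code():
    return set(random.sample(range(N), K))

x = random_code()
listened_for = set(random.sample(sorted(x), 15))   # a downstream cell samples 15 of X's neurons

overlaps = [len(x & random_code()) for _ in range(TRIALS)]
print(sum(overlaps) / TRIALS)   # ~4 on average: two random codes share only a few neurons
print(max(overlaps))            # worst case over 10,000 tries is still tiny compared to 200

false_alarms = sum(listened_for <= random_code() for _ in range(TRIALS))
print(false_alarms)             # 0: another random code essentially never contains all 15
```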

Learning and synaptogenesis

According to Jeff, the brain does not learn by changing the strength of synapses, but rather by forming new synapses (synaptogenesis). Synaptogenesis takes about an hour. How does short-term memory work faster than that? There’s something called “silent synapses”, which are synapses that don’t release neurotransmitters. Jeff’s (unproven) theory is that short-term memory entails the conversion of silent synapses into active synapses, and that this occurs near-instantaneously.

Vision processing

His most recent paper has this figure illustrating image processing in the visual cortex:

[Figure from the paper: visual regions (V1, V2, etc.) connected by both hierarchical and non-hierarchical (green) arrows.]

As I understand it, the idea is that every part of the field of view is trying to fit what it’s looking at into its own world model. In other words, when you look at a cup, you shouldn’t be thinking that the left, center, and right parts of the field-of-view are combined together and then the whole thing is recognized as a coffee cup, but rather that the left part of the field-of-view figures out that it’s looking at the left side of the coffee cup, the center part of the field-of-view figures out that it’s looking at the center of the coffee cup, and the right part of the field-of-view figures out that it’s looking at the right side of the coffee cup. This process is facilitated by information exchange between different parts of the field-of-view, as well as integrating the information that a single cortical column sees over time as the eye or coffee cup moves. As evidence, they note that there are loads of connections in the visual cortex that are non-hierarchical (green arrows). Meanwhile, the different visual areas (V1, V2, etc.) are supposed to operate on different spatial scales, such that a faraway cup of coffee (taking up a tiny section of your field-of-view) might be recognized mainly in V1, while a close-up cup of coffee (taking up a larger chunk of your field-of-view) might be recognized mainly in V4, or something like that.

Maybe this has some profound implications for building CNN image classifiers, but I haven’t figured out what exactly they would be, other than “Maybe try putting in a bunch of recurrent, non-hierarchical, and/or feedback connections?”
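
Just to make that vague suggestion concrete, here is one crude sketch of what “add feedback connections” might look like in PyTorch: a tiny conv stack run for a few time steps, with a later layer’s activity fed back to an earlier one. The architecture is entirely my own invention, not anything from the paper.

```python
# A toy conv net with a non-hierarchical (feedback) connection, run recurrently.
# Layer sizes and the number of time steps are arbitrary.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyRecurrentCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 16, kernel_size=3, padding=1)    # "lower" visual area
        self.conv2 = nn.Conv2d(16, 32, kernel_size=3, padding=1)   # "higher" visual area
        self.feedback = nn.Conv2d(32, 16, kernel_size=1)           # higher -> lower feedback

    def forward(self, x, steps=3):
        h1 = F.relu(self.conv1(x))
        for _ in range(steps):
            h2 = F.relu(self.conv2(h1))
            # Feedback path: the higher layer's current guess modulates the lower layer.
            h1 = F.relu(self.conv1(x) + self.feedback(h2))
        return h2

model = TinyRecurrentCNN()
print(model(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 32, 32, 32])
```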

My conclusions for AGI safety

Jeff’s proud pursuit of superintelligence-as-fast-as-possible is a nice reminder that, despite the mainstreaming of AGI safety over the past few years, there’s still a lot more advocacy and outreach work to be done. Again, I’m concerned not so much about the fact that he disagrees with arguments for AGI existential risks, but rather that he (apparently) has never even heard the arguments for AGI existential risks, at least not from any source capable of explaining them correctly.

As for paths and timelines: I’m not in a great position to judge whether Jeff is on the right track, and there are way too many people who claim to understand the secrets of the brain for me to put a lot of weight on any one of them being profoundly correct. Still, I was pretty impressed, and I’m updating slightly in favor of neuromorphic AGI happening soon, particularly because of his claim that the whole neocortex is more-or-less cytoarchitecturally uniform.

Finally, maybe the most useful thing I got out of this is fleshing out my thinking about what an AGI’s world model might look like.

Jeff is proposing that our human brain’s world models are ridiculously profligate in the number of primitive entries included. Our world models don’t just have one entry for “shirt”, but rather separate entries for wet shirt, folded shirt, shirt-on-ironing-board, shirt-on-floor, shirt-on-our-body, shirt-on-someone-else’s-body, etc. etc. etc. After all, each of those things is associated with a different suite of sensory predictions! In fact, it’s even more profligate than that: Really, there might be an entry for “shirt on the floor, as it feels to the center part of my left insole when I step on it”, and an entry for “my yellow T-shirt on the floor, as it appears to the rod cells in my right eye’s upper peripheral vision”. Likewise, instead of one entry for the word “shirt”, there are thousands of them in the various columns of the auditory cortex (for the spoken word), and thousands more in the columns of the visual cortex (for the written word). To the extent that there’s any generic abstract concept of “shirt” in the human brain, it would probably be some meta-level web of learned connections and associations and transformations between all these different entries.

If we build an AI which, like the human brain, has literally trillions of primitive elements in its world model, it seems hopeless to try to peer inside and interpret what it’s thinking. But maybe it’s not so bad? Let’s say some part of cortical column #127360 has 2000 active neurons at some moment. We can break that down into 10 simultaneous active concepts (implemented as sparse population codes of 200 neurons each), and then for each of those 10, we can look back at the record of what was going on the first time that code ever appeared. We can look at the connections between that code and columns of the language center, and write down all those words. We can look at the connections between that code and columns of the visual cortex, and display all those images. Probably we can figure out more-or-less what that code is referring to, right? But it might take 1000 person-years to interpret one second of thought by a human-brain-like AGI! (...Unless we have access to an army of AI helpers, says the disembodied voice of Paul Christiano....) Also, some entries of the world model might be just plain illegible despite our best efforts, e.g. the various neural codes active in Ed Witten’s brain when he thinks about theoretical physics.
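
To make that interpretability procedure slightly more concrete, here is a toy Python version of the lookup step I’m imagining (everything here, including the dictionary of known codes and the specific numbers, is hypothetical):

```python
# Toy interpretability: explain ~2,000 active neurons as a superposition of known
# 200-neuron sparse codes by checking overlap against a stored dictionary.

import random
random.seed(1)

N, K = 10_000, 200
dictionary = {f"concept_{i}": set(random.sample(range(N), K)) for i in range(1000)}

# Pretend the column's current activity is 10 of those codes superimposed:
truly_active = [f"concept_{i}" for i in random.sample(range(1000), 10)]
activity = set().union(*(dictionary[name] for name in truly_active))

# Decoding: any known code with large overlap with the activity is probably "on".
decoded = [name for name, code in dictionary.items() if len(code & activity) > K // 2]
print(sorted(decoded) == sorted(truly_active))   # True (with overwhelming probability)
```

The hard part in the real thing, of course, would be building and validating that dictionary of codes in the first place, which is roughly where the “1000 person-years” would go.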