Connectionism: Modeling the mind with neural networks

For about a century, people have known that the brain is made up of neurons which connect to one another and perform computations through electrochemical transmission. For about half a century, people have known enough about computers to realize that the brain doesn’t look much like one, but still computes pretty well regardless. How?

Spreading Activation was one of the first models of mental computation. In this theory, you can imagine the brain as a bunch of nodes in a graph with labels like “Warlord”, “Mongol”, “Barbarian”, “Genghis Khan”, and “Salmon”. Each node has certain connections to the others; when two nodes get activated around the same time, the connection between them strengthens. When someone asks a question like “Who was that barbaric Mongol warlord, again?” it activates the nodes “warlord”, “barbarian”, and “Mongol”. The activation spreads to all the nodes connected to these, activating them too, and the most strongly activated node will be the one that’s closely connected to all three: the barbaric Mongol warlord in question, Genghis Khan. All the while, “salmon”, which has no connection to any of these concepts, just sits on its own not being activated. This fits with experience: if someone asks us about barbaric Mongol warlords, the name “Genghis Khan” pops into our brain like magic, while we continue not thinking about salmon if we weren’t thinking about them before.
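
To make the mechanism concrete, here is a minimal sketch of spreading activation in Python. The node names and link strengths are invented for illustration; a real model would learn them from co-occurrence rather than having them written in by hand.

```python
# A toy spreading activation network: each node links to others with a strength.
links = {
    "warlord":   {"genghis khan": 0.9, "barbarian": 0.4},
    "barbarian": {"genghis khan": 0.8, "warlord": 0.4},
    "mongol":    {"genghis khan": 0.9, "mongolia": 0.7},
    "salmon":    {"fish": 0.9},   # connected to nothing relevant to the question
}

def spread(seeds, links, seed_activation=1.0):
    """Activate the seed nodes, then let one wave of activation spread outward."""
    activation = {node: seed_activation for node in seeds}
    for node in seeds:
        for neighbor, strength in links.get(node, {}).items():
            activation[neighbor] = activation.get(neighbor, 0.0) + seed_activation * strength
    return activation

seeds = {"warlord", "barbarian", "mongol"}
activation = spread(seeds, links)

# The most strongly activated non-seed node is the network's "answer".
answer = max((n for n in activation if n not in seeds), key=lambda n: activation[n])
print(answer)   # genghis khan -- and "salmon" never gets activated at all
```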

Bark leash bone wag puppy fetch. If the word “dog” is now running through your head, you may be a victim of spreading activation, as were participants in something called a Deese-Roediger-McDermott experiment, who, when asked to quickly memorize a list of words like those and then tested on their retention several minutes later, were more likely to “remember” “dog” than any of the words actually on the list.

So this does seem attractive, and it does avoid the folk psychology concept of a “belief”. The spreading activation network above was able to successfully answer a question without any representation of propositional statements like “Genghis Khan was a barbaric Mongol warlord.” And one could get really enthusiastic about this and try to apply it to motivation. Maybe we have nodes like “Hunger”, “Food”, “McDonalds”, and “*GET IN CAR, DRIVE TO MCDONALDS*”. The stomach could send a burst of activation to “Hunger”, which in turn activates the closely related “Food”, which in turn activates the closely related “McDonalds”, which in turn activates the closely related “*GET IN CAR, DRIVE TO MCDONALDS*”, and then before you know it you’re ordering a Big Mac.

But when you try to implement this on a computer, you don’t get very far. Although it can perform certain very basic computations, it has trouble correcting itself, handling anything too complicated (the question “name one person who is *not* a barbaric Mongol warlord” would still return “Genghis Khan” on our toy spreading activation network), or making good choices (you can convince the toy network McDonalds is your best dining choice just by saying its name a lot; the network doesn’t care about food quality, prices, or anything else.)

This simple spreading activation model also runs up against modern neuroscience research, which mostly contradicts the idea of a “grandmother cell”, i.e. a single neuron that represents a single concept like your grandmother. Mysteriously, all concepts seem to be represented everywhere at once: Karl Lashley found you can remove any part of a rat’s cortex without significantly damaging a specific memory, suggesting the memory was not localized to any one spot. How can this be?

Computer research into neural nets developed a model that could answer these and other objections, transforming the immature spreading activation model into full-blown connectionism.

CONNECTIONISM

Connectionism is what happens when you try to implement associationism on a computer and find out it’s a lot weirder than you thought.

Take a bunch of miniprocessors called “units” and connect them to each other with unidirectional links. Call some units “inputs” and others “outputs”. Decide what you want to do with them: maybe learn to distinguish chairs from non-chairs.

Each unit computes a single value representing its “activity level”; each link has a “strength” with which it links its origin unit to its destination unit. When a unit is “activated” (gets an activity level > 0), it sends that activation along all of its outgoing links. If it has an activation level of .5, and two outgoing links, one to A with strength .33 and one to B with strength -.5, then it sends .165 activation to unit A and -.25 activation to unit B. A and B might also be getting lots of activation from other units they’re connected to.
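
The arithmetic here is just multiply-and-accumulate. A minimal sketch of the propagation step described above, using the same numbers:

```python
# One unit with activity 0.5 and two outgoing links, as in the example above.
activity = 0.5
outgoing_links = {"A": 0.33, "B": -0.5}   # destination unit -> link strength

# Each destination accumulates (sender's activity x link strength); in a full
# network it would also be accumulating contributions from its other senders.
incoming = {dest: 0.0 for dest in outgoing_links}
for dest, strength in outgoing_links.items():
    incoming[dest] += activity * strength

print(incoming)   # {'A': 0.165, 'B': -0.25}
```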

Name your two output units “CHAIR” and “NOT A CHAIR”. Connect your many input units to sense-data about the objects you want to classify as chairs or non-chairs; each one could be the luminosity of a pixel in an image of the object, or you could be kind to it and feed it pre-processed input like “IS MADE OF WOOD” and “IS SENTIENT”.

Suppose we decide to start with a nice wooden chair. The IS MADE OF WOOD node lights up to its maximum value of 1: it’s definitely made of wood! The IS SENTIENT node stays dark; it’s definitely not sentient. And then...nothing happens, because we forgot to set the link strengths to anything other than 0. IS MADE OF WOOD is sending activation all over, but it’s getting multiplied by zero and everything else stays dark.

We now need a program to train the neural net (or a very dedicated human with lots of free time). The training program knows that the correct answer should have been CHAIR, and so the node we designated “CHAIR” should have lit up. It uses one of several algorithms to change the strengths of the links so that the next time this pattern of nodes lights up, CHAIR will light up too. For example, it might change the link from IS MADE OF WOOD to CHAIR to .3 (why doesn’t it change it all the way to its maximum value? Because that would erase all previous data and reduce the system’s entire intelligence to what it learned on just this one case).

On the other hand, IS SENTIENT is dark, so the training program might infer that IS SENTIENT is not a characteristic of chairs, and change the link strength there accordingly.

The next time the program sees a picture of a wooden chair, IS MADE OF WOOD will light up, and it will send its activation to CHAIR, making CHAIR light up with .3 units of activation: the program has a weak suspicion that the picture is a chair.
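
A minimal sketch of that training step, using a crude error-driven nudge (essentially the delta rule) as a stand-in for “one of several algorithms”; the node names and the .3 step size come from the example above:

```python
# Inputs for the wooden chair example: 1 = feature present, 0 = absent.
inputs = {"IS MADE OF WOOD": 1.0, "IS SENTIENT": 0.0}

# Link strengths from each input to the CHAIR output, all starting at zero.
weights = {"IS MADE OF WOOD": 0.0, "IS SENTIENT": 0.0}

def chair_activation(inputs, weights):
    return sum(inputs[name] * weights[name] for name in inputs)

print(chair_activation(inputs, weights))    # 0.0 -- nothing happens yet

# Training step: the right answer was CHAIR (target 1.0), so nudge each link
# in the direction that would have produced that answer. The small learning
# rate is deliberate: jumping straight to the maximum would wipe out whatever
# the net had learned from earlier examples.
learning_rate = 0.3
error = 1.0 - chair_activation(inputs, weights)
for name in weights:
    weights[name] += learning_rate * error * inputs[name]

print(weights)                              # IS MADE OF WOOD -> 0.3, IS SENTIENT stays 0.0
print(chair_activation(inputs, weights))    # 0.3 -- a weak suspicion it's a chair
```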

This is a pretty boring neural network, but if we add several hundred input nodes with all conceivable properties relevant to chairhood and spend a lot of computing power, eventually the program will become pretty good at telling chairs from non-chairs, and will “learn” complicated rules like: a three-legged wooden object is a stool, which sort of counts as a chair, but a three-legged sentient being is an injured dog, and sitting on it will only make it angry.

Larger and more complicated neural nets contain “hidden nodes” (the equivalent of interneurons, which sit between the input and the output and exist only to perform intermediate computations), feedback links from output nodes back to earlier nodes that can create stable loops of activation, and other complications. They can perform much more difficult classification problems: identifying words from speech, or people from a photograph.
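
For a sense of what a hidden layer adds, here is a minimal sketch of a single forward pass through one hidden layer; the weights are invented, and feedback links and training are left out entirely:

```python
import math

def squash(x):
    # Squash any value into the range (0, 1); a common choice of activation function.
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, hidden_weights, output_weights):
    """One pass: input units -> hidden units -> output units."""
    hidden = [squash(sum(w * x for w, x in zip(ws, inputs))) for ws in hidden_weights]
    return [squash(sum(w * h for w, h in zip(ws, hidden))) for ws in output_weights]

# Two inputs, two hidden units, one output unit; all weights are made up.
hidden_weights = [[0.8, -0.4], [-0.3, 0.9]]   # one row of input weights per hidden unit
output_weights = [[1.2, -0.7]]                # one row of hidden weights per output unit
print(forward([1.0, 0.0], hidden_weights, output_weights))
```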

This is interesting because it solves a problem that baffled philosophers for millennia: the difficulty of coming up with good boundaries for categories. Plato famously defined Man as “a featherless biped”; Diogenes famously responded by presenting him with a plucked chicken. There seem to be many soft constraints on humans (can use language, have two legs, have a heartbeat), but there are also examples of humans who violate these constraints (babies, amputees, Dick Cheney) yet still seem obviously human.

Classical computers get bogged down in these problems, but neural nets naturally reason with “cluster structures in thing-space” and are expert classifiers in the same way we ourselves are.

SIMILARITIES BETWEEN NETS AND BRAINS


Even aside from their skill at classifying and pattern-matching, connectionist networks share many properties with brains, such as:

- Obvious structural similarities: neural nets work by lots of units which activate with different strengths and then spread that activation through links; the brain works by lots of neurons which fire at different rates and then spread that activation through axons.

- Lack of a “grandmother cell”. A classical computer sticks each bit of memory in a particular location. A neural net stores memories as patterns of activation across all units in the network. In a feedback network, specific oft-repeated patterns can form attractor states to which the network naturally tends if pushed anywhere nearby (a minimal sketch of such an attractor network follows this list). Association between one idea and another is not through physical contiguity, but through similarities in the pattern. “Grandmother” probably has most of the same neurons in the same state as “grandfather”, and so it takes only a tiny stimulus to push the net from one attractor state to the other.

- Graceful failure: Classical computer programs do not fail gracefully; flip one bit, and the whole thing blows up and you have to spend the rest of your day messing around with a debugger. Destroying a few units in a neural net may only cost it a little bit of its processing power. This matches with the brain: losing a couple of neurons may make you think less clearly; losing a lot of neurons may give you dementia, memory loss and poor judgment. But there’s no one neuron without which you just sit there near-catatonic, chanting “ERROR: NEURON 10559020481 NOT RESPONDING.” And Karl Lashley can take out any part of a rat’s cortex without affecting its memories too much.

- Remembering and forgetting: Neural nets can form memories, and the more the stimulus recurs to them the better they will remember it. But the longer they go without considering the stimulus, the more likely it is that the units involved in the memory-pattern will strengthen other connections, and then it will be harder to get them back in the memory pattern. This is much closer to how humans treat memory than the pristine, eternal encoding of classical computers.

- Ability to quickly locate solutions that best satisfy many soft constraints. What’s a good place for dinner that’s not too expensive, not more than twenty minutes away, serves decent cocktails, and has burgers for the kids? A classical computer would have to first identify the solution class as “restaurants”, then search every restaurant it knows to see if it matches each constraint, then fail to return an answer if no such restaurant exists. A neural net will just *settle* on the best answer, and if the cocktails there aren’t really that good, it’ll still settle on it, just with a lower strength.

- Context-sensitivity. Gold silver copper iron tin, and now when I say “lead”, you’re thinking of Element 82 (Pb), even though without the context a more natural interpretation is of the “leadership” variety. Currently active units can force others into a different pattern, giving context sensitivity not only to semantic priming as in the above example, but to emotions (people’s thoughts follow different patterns when they’re happy or sad), situations, and people.
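
Here is the attractor-network sketch promised above, using a Hopfield-style net (a classic connectionist architecture, picked purely for illustration). Memories are stored as patterns across all units, a noisy or damaged pattern settles back into the nearest stored memory, and the settling is the same “relax into the best fit” dynamic behind the soft-constraints point. The “grandmother”/“grandfather” patterns and every number are invented.

```python
import numpy as np

# Two stored memories as +1/-1 activity patterns over eight units. They share
# most units, the way "grandmother" and "grandfather" plausibly would.
patterns = np.array([
    [ 1, -1,  1, -1,  1, -1,  1, -1],   # "grandmother" (illustrative)
    [ 1, -1,  1, -1,  1, -1, -1,  1],   # "grandfather": differs in only two units
])

# Hebbian storage: strengthen links between units that are active together.
weights = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(weights, 0.0)

def settle(state, weights, steps=10):
    """Repeatedly update every unit until the network relaxes into an attractor."""
    for _ in range(steps):
        state = np.where(weights @ state >= 0, 1.0, -1.0)
    return state

# "Damage" the grandmother pattern by flipping two units, then let it settle:
noisy = patterns[0].astype(float)
noisy[0] *= -1
noisy[3] *= -1
print(settle(noisy, weights))   # recovers the full "grandmother" pattern
```

Note that no single link or unit “is” the grandmother; the memory lives in the whole pattern of link strengths, which is also why knocking out a piece of the network degrades it only gradually.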

Neural nets have also been used to simulate the results of many popular psychological experiments, including different types of priming, cognitive dissonance, and several of the biases and heuristics.

CONNECTIONISM AND REINFORCEMENT LEARNING

The link between connectionism and associationism is pretty obvious, but the link between connectionism and behaviorism is more elegant.

In most artificial neural nets, you need a training program to teach the net whether it’s right or wrong and which way to adjust the weights. Brains don’t have that luxury. Instead, part of their training algorithm for cognitive tasks is based on surprise: if you did not expect the sun to rise today, and you saw it rise anyway, you should probably decrease the strength of whatever links led you to that conclusion, and increase the strengths of any links that would have correctly predicted the sunrise.
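
A minimal sketch of that surprise-driven adjustment, with invented cues and numbers; the update is the same kind of error-driven nudge as in the chair example, except that the “teacher” is just the observed outcome rather than a separate training program:

```python
# Invented cues that fed a (wrong) prediction that the sun would not rise.
cues = {"it is midwinter": 1.0, "the sky is dark": 1.0, "the rooster crowed": 1.0}
weights = {"it is midwinter": -0.5, "the sky is dark": -0.25, "the rooster crowed": 0.25}

predicted = sum(cues[c] * weights[c] for c in cues)   # -0.5, i.e. "no sunrise expected"
observed = 1.0                                        # ...and yet it rose

# Surprise is the gap between what happened and what was predicted; nudge every
# active link in the direction that would have predicted the actual outcome.
surprise = observed - predicted
rate = 0.1
for c in weights:
    weights[c] += rate * surprise * cues[c]

print(weights)   # the links arguing against a sunrise are weakened, and the
                 # rooster link, which argued for one, is strengthened
```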

Motivational links, however, could be modified by reinforcement. If a certain action leads to reward, strengthen the links that led to that action; if it leads to punishment, strengthen the links that would have made you avoid that action.
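
And a matching sketch for the motivational links, where the nudge comes from reward or punishment rather than from prediction error (names and numbers invented, echoing the McDonalds chain from earlier):

```python
# How strongly an active "hunger" node pushes toward each candidate action.
weights = {"GET IN CAR, DRIVE TO MCDONALDS": 0.5, "COOK AT HOME": 0.5}
hunger = 1.0
rate = 0.125

def reinforce(action, reward):
    """Strengthen the link that led to a rewarded action; weaken it if punished."""
    weights[action] += rate * reward * hunger

reinforce("GET IN CAR, DRIVE TO MCDONALDS", reward=+1.0)   # the Big Mac hit the spot
reinforce("COOK AT HOME", reward=-1.0)                     # the home-cooked meal got burnt
print(weights)   # {'GET IN CAR, DRIVE TO MCDONALDS': 0.625, 'COOK AT HOME': 0.375}
```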

This explains behaviorist principles as a simple case of connectionism, the one where all the links are nice and straight, and you just have to worry about motivation and not about where cognition is coming from. Many of the animals typically studied by behaviorists were simple enough that this simple case was sufficient.

Although I think connectionism is our best current theory for how the mind works at a low level, it’s hard to theorize about just because the networks are so complicated and so hard to simplify. Behaviorism is useful because it reduces the complexity of the networks to a few comprehensible rules, which allow higher level psychological theories and therapies to be derived from them.