Modularity and Buzzy

This is the second part in a mini-sequence presenting material from Robert Kurzban’s excellent book Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind.

Chapter 2: Evolution and the Fragmented Brain

Braitenberg’s Vehicles are thought experiments that use Matchbox car-like vehicles. A simple one might have a sensor that makes the car drive away from heat. A more complex one has four sensors: one for light, one for temperature, one for organic material, and one for oxygen. This can already cause some complex behaviors: ”It dislikes high temperature, turns away from hot places, and at the same time seems to dislike light bulbs with even greater passion, since it turns toward them and destroys them.” Adding simple modules specialized for different tasks, such as avoiding high temperatures, makes the overall behavior increasingly complex as the modules’ influences interact.
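
To make the idea concrete, here is a minimal sketch – my own toy code, not from the book – of a Braitenberg-style vehicle. The sensor wiring and the weights are illustrative assumptions; the point is only that each module is a trivially simple rule, and the seemingly purposeful behavior emerges from their interaction.

```python
# Toy Braitenberg-style vehicle: each sensor module independently
# nudges the wheel motors; no central controller coordinates them.

def drive(left_light, right_light, left_heat, right_heat):
    """Return (left_wheel, right_wheel) speeds.

    Cross-wired excitatory light sensors make the vehicle turn
    *toward* light; same-side inhibitory heat sensors make it veer
    *away* from heat.  (The weights are arbitrary.)
    """
    left_wheel = 5.0 * right_light - 2.0 * left_heat
    right_wheel = 5.0 * left_light - 2.0 * right_heat
    return left_wheel, right_wheel

# Strong light on the right, some heat on the left: the left wheel
# spins faster, so the vehicle wheels around toward the light bulb
# while steering clear of the hot spot.
print(drive(left_light=0.2, right_light=0.8, left_heat=0.5, right_heat=0.1))
# -> (3.0, 0.8)
```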

A ”module”, in the context of the book, is an information-processing mechanism specialized for some function. It’s comparable to a subroutine in a computer program, operating relatively independently of the rest of the code. There is strong reason to believe that human brains are composed of a large number of modules, for specialization yields efficiency.

Consider a hammer or a screwdriver. Both tools have very specific shapes, for they’ve been designed to manipulate objects of a certain shape in a specific way. If they were shaped differently, they’d be worse at the purpose they were intended for. Workers will do better if they have both hammers and screwdrivers in their toolbox, instead of a single ”general” tool meant to perform both functions. Likewise, a toaster is specialized for toasting bread, with slots just large enough for the bread to fit in, but small enough to efficiently deliver the heat to both sides of the bread. You could toast bread with a butane torch, but it would be hard to toast it evenly – assuming you didn’t just immolate the bread. The toaster ”assumes” many things about the problem it has to solve – the shape of the bread, the amount of time the toast needs to be heated, that the socket it’s plugged into will deliver the right kind of power, and so on. You could use the toaster as a paperweight or a weapon, but since it isn’t specialized for those tasks, it would do poorly at them.

To the extent that a problem has regularities, an efficient solution to that problem will embody those regularities. This is true for both physical objects and computational ones. Microsoft Word is worse for writing code than a dedicated programming environment, which has all kinds of specialized tools for the tasks of writing, running and debugging code.

Computer scientists know that the way to write code is to break the problem down into smaller, more narrowly defined problems, which are then solved by their own subroutines. The more one can assume about the problem to be solved, such as the format its input is represented in, the easier it is to write a subroutine for it.
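
A small sketch of this point – my example, not Kurzban’s – is below. The specialized routine gets to assume its input format, which makes it almost trivially simple; a ”general” date parser would have to guess among formats and would still sometimes be wrong.

```python
# Specialized subroutine: assumes its input is exactly 'YYYY-MM-DD'.
# Because the format is guaranteed, the whole job is three slices.

def parse_iso_date(text: str) -> tuple[int, int, int]:
    return int(text[0:4]), int(text[5:7]), int(text[8:10])

print(parse_iso_date("2024-01-03"))  # (2024, 1, 3)

# A 'general purpose' parser, by contrast, would need to handle
# '3 Jan 2024', '01/03/24' (US or European order?), time zones, and
# so on -- vastly more code, for ambiguous and unreliable results.
```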

The idea that specialization produces efficiency is uncontroversial in many fields. Spiders are born with specialized behavioral programs for building all kinds of different webs. Virginia opossums know how to play dead in order to make predators lose interest. Human hearts are specialized for pumping blood, while livers are specialized for filtering it; neither would do well at the opposite task. Nerve cells process information well, while fat cells store energy well. In economics, the principle of comparative advantage says it’s better for a country to specialize in the products it’s best at producing. Vision researchers have found many specialized components in human vision, such as ones tasked with detecting edges in the visual field at particular orientations.

The virtues of specialization are uncontroversial within cell physiology, animal physiology, animal behavior, human physiology, economics and computer science – but less so within psychology. Yet human behavior is also a product of evolution, and evolution favors mechanisms that handle the organism’s tasks efficiently – and you get efficiency through specialization.

A few words are in order about ”general-purpose” objects. Kurzban has been collecting various ”general-purpose” objects, with his current favorite being the bubble sheet given to students for their exams. At the top of the form is written ”General Purpose”.

I love this because it’s ‘general purpose’ as long as your ‘general’ purpose is to record the answers of students on a multiple choice exam to be read by a special machine that generates a computer file of their answers and the number they answered correctly...

There also exist ”general purpose” cleansers, scanners, screwdrivers, calculators, filters, flour, prepaid credit cards, lenses, fertilizers, light bulbs… all of which have relatively narrow functions, though that doesn’t mean they couldn’t do a great deal. Google has a specific function – searching for text – but it can do so on roughly the whole Internet.

People defending the view that the mind has general rather than specialized devices tend to focus on things like learning, and say things like ”The immune system … contains a broad learning system … An alternative would be to have specialized immune modules for different diseases...” But this confuses the breadth of things a system can handle with the specialization of its function. Even though the immune system is capable of learning, it is still specialized for defending the body against harmful pathogens. In AI, even a ”general-purpose” inference engine, capable of learning rules and regularities in statements of predicate logic, would still have a specialized function: finding patterns in statements that were presented to it in the form of sentences in predicate logic.

There are no general-function artifacts, organs, or circuits in the brain, because the very concept makes no sense. If someone told you to manufacture a tool that ”does useful things,” or to write a subroutine that ”does something useful with information,” you would have to narrow down the problem considerably before you could even get started. In the same way, natural selection can’t build brains that just ”learn stuff and compute useful information” – it has to get considerably more specific.

Having established that the brain is likely composed of a number of modules, let’s discuss a related issue: that any specialized computational mechanism – any module – may or may not be connected up to any other module.

Going back to Braitenberg’s Vehicles, suppose a heat sensor tells the Vehicle to drive backwards, while a light sensor tells it to drive forwards. You could solve the conflict by letting the sensors affect the wheels by a varying amount, depending on how close to something the Vehicle was. If the heat sensor said ”speed 2 backwards” and the light sensor said ”speed 5 forwards”, the Vehicle would go forward at speed 3 (five minus two). Alternatively, you could make a connection between the two sensors, so that whenever the light sensor was active, it would temporarily shut down the heat sensor. But then whenever you added a new sensor, you’d have to add connections to all the already existing ones, which would quickly get out of hand. Clearly, for complicated organisms, modules should only be directly connected if there’s a clear need for it. For biological organisms, if there isn’t a clear selection pressure to build a connection, then we shouldn’t expect one to exist.
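
Here is a sketch of the two arbitration schemes – again my own toy code, using the numbers from the text. The summation scheme needs no links between modules at all, while the inhibition scheme needs a dedicated connection for every pair of modules that must interact.

```python
# Scheme 1: no connections between modules.  The wheels simply sum
# the modules' weighted votes; adding a new sensor costs nothing.
def summed_speed(votes):
    return sum(votes)

print(summed_speed([-2, +5]))   # heat: -2, light: +5  ->  3 forward

# Scheme 2: a direct inhibitory connection -- an active light sensor
# temporarily shuts the heat sensor down entirely.
def inhibited_speed(heat_vote, light_vote):
    if light_vote != 0:          # light active: heat is silenced
        heat_vote = 0
    return heat_vote + light_vote

print(inhibited_speed(-2, +5))  # -> 5: heat's objection never registers

# The catch with scheme 2: n modules need up to n * (n - 1) / 2
# pairwise links, so every new sensor must be wired to all the
# existing ones -- which quickly gets out of hand.
```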

And not every module in humans seems to be connected with all the others, either. Yvain just recently gave us a list of many failures of introspection, one of which is discussed in the book: people shown four identical pairs of panty hose consistently chose the one all the way to the right. Asked why they chose that one in particular, they gave explanations such as the color or texture of the panty hose, even though the pairs were all identical.

The book also brings up split-brain patients, whose cerebral hemispheres have been surgically disconnected: asked to explain an action initiated by the disconnected hemisphere, the hemisphere responsible for speech readily confabulates a plausible-sounding reason. The claim is that this unnatural separation in split-brain patients is exactly analogous to natural separations in normal brains. The modules explaining the decision have little or no access to the modules that generated the decision.

More fundamentally, if the brain consists of a large number of specialized modules, then information in any one of them might or might not be transmitted to any other module. This crucial insight is the origin of the claim that your brain can represent mutually inconsistent things at the same time. As long as information is ”walled off”, many, many contradictions can be maintained within one head.
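
The panty hose case can be recast as a toy sketch – mine, not the book’s – of such walled-off information. The module names are made up; the point is just that nothing forces two disconnected representations to agree:

```python
# Two modules, no connection between them, each with its own store.
decision_module = {"actual_rule": "pick the rightmost item"}
explainer_module = {"stated_reason": "it had the nicest texture"}

# There is no shared memory and no link between the modules, so the
# head as a whole 'contains' a contradiction that neither ever notices.
print(decision_module["actual_rule"])
print(explainer_module["stated_reason"])
```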

Chapter 3: Who is ”I”?

”Cranium Command” is a former attraction in Walt Disney World. The premise is that inside each human brain is a command center, led by a Cranium Commando. In the attraction, you take the role of Buzzy, a Cranium Commando in the head of Bobby, a twelve-year-old boy. Buzzy is surrounded by large screens and readout displays. He gets information from various parts of the brain and different organs, represented by various characters. Buzzy sees and hears what Bobby sees and hears, as well as getting reports from all of Bobby’s organs. In response, Buzzy gives various commands and scripts the words that Bobby will speak.

Cranium Command does get some things right, in that it divides the brain into different functional parts. But this is obviously not how real brains work. For one, if they worked this way, it’d mean there was another tiny commando inside Buzzy’s brain, and another inside that one, and so on. A part of a brain can’t be a whole brain.

Buzzy is reminiscent of what Daniel Dennett calls the Cartesian Theater. It’s the intuition that there’s someone – a ”me” – inside the brain, watching what the eyes see and hearing what the ears hear. Although many people understand on one level that this is false, the intuition of a special observer keeps reasserting itself in various guises. As the philosopher Jerry Fodor writes: ”If… there is a community of computers living in my head, there had also better be somebody who is in charge; and, by God, it had better be me.”

One intuition says that it is the conscious modules that are ”us”. The interpretations of the work of Benjamin Libet provide a good example of this. Libet measured the brain activity of his test subjects, and told them to perform a simple wrist movement at a moment of their choosing. Libet found that brain activity preceded the subjects’ reports of their wish to move their wrist. These results, and their later replications, got a lot of publicity. Libet wrote, ”in the traditional view of conscious will and free will, one would expect conscious will to appear before, or at the onset, of [brain activity]”. A 2008 headline in Wired, discussing a study similar to Libet’s, read: ”Brain Scanners Can See Your Decisions Before You Make Them.”

Now one might ask – why is this surprising? Consider the act of reading. While you read these words, several processes take place before the content of the text reaches your conscious awareness.

For example, you have no conscious access to how you identify the letters on the page; this job is done by ”low-level” modules, and you don’t have any experience of how they work. You can think of vision as a modular cascade, with many different systems interacting with one another, building up the percept that is experienced. We are aware of only the last step in this complex process. Most of the modules in vision are nonconscious, giving rise, eventually, to the conscious experience of seeing.
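
As a cartoon of the cascade – my sketch, with made-up stage names, not the book’s – each module consumes the previous module’s output, and introspection only ever ”sees” the final result:

```python
# Vision as a cascade of nonconscious modules.  Only the value
# returned at the very end corresponds to conscious experience.

def detect_edges(retinal_input):     # nonconscious
    return "edges and contours"

def group_shapes(edges):             # nonconscious
    return "letter shapes"

def recognize_words(shapes):         # nonconscious
    return "recognized words"

def conscious_percept(retinal_input):
    return recognize_words(group_shapes(detect_edges(retinal_input)))

print(conscious_percept("raw retinal input"))  # 'recognized words'
```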

So, when you’re going to move your hand, there are a number of modules involved, and some module has to make the initial decision in this cascade. It seems to me that there are really only two possibilities. One possibility is that the very first computation in the very first module that starts the string is one of the operations that’s conscious. In this case, the conscious experience of the decision and the brain activity will occur at the same time. The only other possibility is that in the long string of operations that occur, from the initiation of the decision to move the wrist to the eventual movement of the wrist, some operation other than the very first one is associated with consciousness.

Libet says that in ”the traditional view of conscious will”, conscious will would appear at the onset or before brain activity. But ”before” is impossible. The module that’s making the decision to move the wrist is a part of the brain, and it has to have some physical existence. There’s just no way that the conscious decision could come before the brain activity.

Neither should it be surprising that our conscious decision comes after the initial brain activity. It would, in principle, be possible that the very first little module that initiated the decision-making process would be one of the few modules associated with conscious awareness. But if conscious modules are just one type of module among many, then there is nothing particularly surprising in the finding that a non-conscious module is the one initiating the process. Neither, for that matter, is it surprising that the first module to initiate the flick of the wrist doesn’t happen to be one of the ones associated with vision, or with regulating our heartbeat. Why should it be?

So there are many modules in your brain, some of them conscious, some of them not. Many of the nonconscious ones are very important, processing information about the sensory world, making decisions about action, and so on.

If that’s right, it seems funny to refer to any particular module or set of modules as more ”you” than any other set. Modules have functions, and they do their jobs, and they interact with other modules in your head. There’s no Buzzy in there, no little brain running the show, just different bits with different roles to play.

What I take from this – and I know that not everyone will agree – is that talking about the ”self” is problematic. Which bits, which modules, get to be called ”me?” Why some but not others? Should we take the conscious ones to be special in some way? If so, why? [...]

There’s no doubt that parts of your brain cause your muscles to move, including the very important muscles that push air out of your lungs past your vocal cords, lips, and tongue to make the noises that we call language. Some part of the brain does that. Sure.

But let’s be clear. Whatever is doing that is some part of your brain, and it seems reasonable to ask if there’s anything special about it. Those modules, the ones that make noises with your lungs, might be ”in charge” in some sense, but, then again, maybe they’re not. It’s easy to get stuck on the notion that we should think about these conscious systems as being special in some way. In the end, if it’s true that your brain consists of many, many little modules with various functions, and if only a small number of them are conscious, then there might not be any particular reason to consider some of them to be ”you” or ”really you” or your ”self” or maybe anything else particularly special.