Behavior: The Control of Perception

This is the second of three posts dealing with control theory and Behavior: The Control of Perception by William Powers. The previous post gave an introduction to control theory, in the hopes that a shared language will help communicate the models the book is discussing. This post discusses the model introduced in the book. The next post will provide commentary on the model and what I see as its implications, for both LW and AI.

B:CP was published in 1973 by William Powers, who was a controls engineer before he turned his attention to psychology. Perhaps unsurprisingly, he thought that the best lens for psychology was the one he had been trained in, and several sections of the book contrast his approach with the behaviorist approach. That debate is before my time, and so I find it mostly uninteresting; I’ll focus instead on the meat of his model, and bring up behaviorism only when I think the contrast clarifies the difference in methodology.

The first five chapters of B:CP introduce analog computers and feedback loops, and make the claim that it’s better to model the nervous system as an analog computer, with continuous neural currents (the strength of the current determined by the rate of the underlying impulses and the number of branches, since each impulse has the same strength), than as a digital computer. On the macroscale this seems unobjectionable, and while it makes the model clearer I’m not sure it’s necessary. He also steps through how to physically instantiate a number of useful mathematical functions with a handful of neurons; in general I’ll ignore that detailed treatment, but you should trust that the book makes a much more detailed argument for the physical plausibility of this model than I do here.
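
To give a flavor of what that looks like, here is a minimal sketch (my own rendering, not Powers’s notation; the function names and constants are invented) treating a neural current as a single continuous number, with a comparator as a subtraction of two currents and an integrator as a neuron that accumulates its input with a slow leak:

```python
# Toy sketch: "neural current" as a continuous quantity, plus a couple of the
# analog building blocks Powers describes. Names and constants are illustrative.

def neural_current(impulse_rate, n_branches=1.0):
    """Current carried by a fiber: impulse rate times branch count,
    since each impulse has the same strength."""
    return impulse_rate * n_branches

def comparator(reference, perception):
    """Error signal: the difference between two incoming currents."""
    return reference - perception

def leaky_integrator(state, signal, gain=1.0, leak=0.1, dt=0.01):
    """One time step of an analog integrator: accumulate input, slowly leak."""
    return state + dt * (gain * signal - leak * state)
```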

The sixth chapter discusses the idea of hierarchical modeling. We saw a bit of that in the last post, with the example of a satellite that had two control systems: one used sense data about the thruster impulses and the rotation to determine the inertia model, and the other used the rotation and the inertia model to determine the thruster impulses. The key point here is that the models are inherently local, and thus can be separated into units. The first model doesn’t have to know that there’s another feedback loop; it just puts the sense data it receives through a formula, and uses another formula to update its memory, which has the property of reducing the error of its model. Another way to look at this is that control systems are, in some sense, agnostic about what they’re sensing and what they’re doing, and their reference level comes from the environment just like any other input. While the two satellite systems might not look stacked, when one control circuit has no outputs except to vary the reference levels of other control circuits, it makes sense to see the reference-setting circuit as superior in the hierarchical organization of the system.
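
As a rough illustration of that locality, here is a sketch, with class names and gains of my own invention, of a control unit that treats its reference as just another input, and of ‘stacking’ as one loop’s output becoming another loop’s reference:

```python
class ControlLoop:
    """A local control unit: it knows nothing about what it senses or drives.
    It just compares a perception to a reference and emits an output."""

    def __init__(self, gain):
        self.gain = gain
        self.reference = 0.0   # set by the environment or by a higher loop

    def step(self, perception):
        error = self.reference - perception
        return self.gain * error


# "Stacking": the higher loop's only output is the lower loop's reference.
posture = ControlLoop(gain=0.5)   # higher level: controls a perceived joint angle
muscle = ControlLoop(gain=2.0)    # lower level: controls a perceived tension

def hierarchy_step(perceived_angle, perceived_tension):
    muscle.reference = posture.step(perceived_angle)   # higher output -> lower reference
    return muscle.step(perceived_tension)              # lower output drives the muscle
```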

There’s a key insight hidden there which is probably best to show by example. The next five chapters of B:CP step through five levels in the hierarchy. Imagine this section as building a model of a human as a robot- there are output devices (muscles and glands and so on) and input devices (sensory nerve endings) that are connected to the environment. Powers discusses both output and input, but here I’ll just discuss output for brevity’s sake.

Powers calls the first level intensity, and it deals directly with those output and input devices. Consider a muscle; the control loop there might have a reference tension in the muscle that it acts to maintain, and that tension is the loop’s point of contact with the outside environment. From the point of view of measured units, the control loops here convert between some physical quantity and neural current.

Powers calls the second level sensation, and it deals with combinations of first level sensors. As we’ve put all of the actual muscular effort of the arm and hand into the first level, the second level is one level of abstraction up. Powers suggests that the arm and hand have about 27 independent movements, and each movement represents some vector in the many-dimensional space of however many first order control loops there are. (Flexing the wrist, for example, might reflect an increase in the effort intensity of the muscles on one side of the forearm and a decrease in the muscles on the other side.) Note that from this point on, the measured units are all the same (amperes of neural current), and that means we can combine unlike physical things because they have some conceptual similarity. This is the level of clustering where it starts to make sense to talk about a ‘wrist,’ or at least particular joints in it, or something like a ‘lemonade taste,’ which exists as a mental combination of the levels of sugar and acid in a liquid. The value of a hierarchy also starts to become clear- when we want to flex the wrist, we don’t have to calculate what command to send to each individual muscle- we simply set a new reference level for the wrist-controller, and it adjusts the reference levels for many different muscle-controllers.
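
Here is what that vector picture might look like as code, with entirely made-up weights: the wrist-controller’s single output is spread across several muscle-controllers along a fixed direction in ‘muscle-tension space’:

```python
import numpy as np

# Hypothetical weights: a second-order "flex the wrist" loop distributes its
# output across four first-order muscle-tension loops (flexors up, extensors down).
wrist_weights = np.array([+0.8, +0.6, -0.7, -0.5])

def set_muscle_references(wrist_output, baseline_tensions):
    """The wrist controller never addresses muscles individually; one number
    is scaled along a fixed direction to become many tension references."""
    return baseline_tensions + wrist_output * wrist_weights

# e.g. set_muscle_references(0.3, np.zeros(4)) -> four new first-order references
```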

Powers calls the third level configuration, and it deals with combinations of second level loops. The example here would be the position of the joints in the arm or hand, such as positioning the hand to perform the Vulcan salute.

Powers calls the fourth level transition, and it deals with combinations of third level loops, as well as integration or differentiation of them. A third order control loop might put the mouth and vocal cords where they need to be to make a particular note, and a fourth order control loop could vary the note that third order control loop is trying to hit in order to create a pitch that rises at a certain rate.
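
A toy rendering of the pitch example (the names, gains, and time step are my inventions): the fourth-order loop perceives the rate of change of the third-order pitch signal and nudges the third-order reference so that the rate matches the one it wants.

```python
# Toy fourth-order ("transition") loop: it controls the *rate* at which pitch
# rises by differentiating the perceived pitch and adjusting the reference
# that the third-order loop is trying to hit.

def transition_step(pitch_now, pitch_before, third_order_reference,
                    desired_rise_rate, dt=0.01, gain=0.5):
    perceived_rate = (pitch_now - pitch_before) / dt    # crude differentiation
    error = desired_rise_rate - perceived_rate
    return third_order_reference + gain * error * dt    # new third-order reference
```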

Powers calls the fifth level sequence, and it deals with combinations and patterns of the fourth level loops. Here we see patterns of behavior- a higher level can direct a loop at this level to ‘walk’ to a particular place, and then the orders go down until muscles contract at the lowest level.
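
A sequence loop could be as simple as stepping through a fixed cycle of lower-level references, advancing whenever the level below has nearly caught up. This is my own toy rendering, not an example from the book:

```python
# Toy "sequence" controller: 'walk' as a cycle of configuration references.
# When the current configuration's error is small enough, advance to the next.

STEP_CYCLE = ["lift_left", "swing_left", "plant_left",
              "lift_right", "swing_right", "plant_right"]

def sequence_step(phase_index, configuration_error, tolerance=0.05):
    if abs(configuration_error) < tolerance:      # the level below has caught up
        phase_index = (phase_index + 1) % len(STEP_CYCLE)
    return phase_index, STEP_CYCLE[phase_index]   # reference for the level below
```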

The key insight is that we can, whenever we identify a layer, see that layer as part of the environment of the hierarchy above it. At the 0th layer we have the universe; at the 1st layer, a body; at the 2nd layer, a brain (and maybe a spine and other bits of the nervous system). Moving up layers in the organization is like peeling an onion: we deal with smaller and smaller portions of the physical brain and with more and more abstract concepts.

I’m not a neuroscientist, but I believe that until this point Powers’s account would attract little controversy. The actual organization structure of the body is not as cleanly pyramidal as this brief summary makes it sound, but Powers acknowledges as much and the view remains broadly accurate and useful. There’s ample neurological evidence to support that there are parts of the brain that do the particular functions we would expect the various orders of control loops to do, and the interested reader should take a look at the book.

Where Powers’s argument becomes more novel, speculative, and contentious is the claim that the levels keep going up, with the same basic architecture. Instead of a layered onion body wrapped around an opaque homunculus mind, it’s an onion all the way to the center- which Powers speculates ends at around the 9th level. (More recent work, I believe, estimates closer to 11 levels.) The hierarchy isn’t necessarily neat, with clearly identifiable levels, but there is some sort of hierarchical block diagram that stretches from terminal goals to the environment. He identifies the upper levels as relationships, algorithms (which he calls program control), principles, and system concepts. As the abstractness of the concepts would suggest, their treatment is more vague, and he manages to combine them all into a single chapter, with very little of the empirical justification that filled earlier chapters.

This seems inherently plausible to me:

  1. It’s parsimonious to use the same approach to signal processing everywhere, and it seems easier to just add on another layer of signal processing (which allows more than a linear increase in potential complexity of organism behavior) than to create an entirely new kind of brain structure.

  2. Deep learning and similar approaches in machine learning can fit comparable architectures in an unsupervised fashion. My understanding of the crossover between machine learning and neuroscience is that we understand machine vision the best, and many good algorithms line up with what we see in human brains: pixels get aggregated to make edges, which get aggregated to make shapes, and so on up the line. So we see the neural hierarchies this model predicts, but this isn’t too much of a surprise because hierarchies are the easiest structures to detect and interpret.

What is meant by “terminal goals”? Well, control systems have to get their reference from somewhere, and the structure of the brain can’t be “turtles all the way up.” Eventually there should be a measured variable, like “hunger,” which is compared to some reference, and any difference between the variable and the reference leads to action targeted at reducing the difference.

That reference could be genetic/instinctual, or determined by early experience, or modified by chemicals, and so on, but the point is that it isn’t the feedback of a neural control loop above it. Chapter 14 discusses learning as the reorganization of the control system, and the processes described there seem potentially sufficient to explain where the reference levels and the terminal goals come from.
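
Here is my reading of that chapter as a toy sketch; the variable names, threshold, and Gaussian tweaking are illustrative, not Powers’s algorithm. The top-level reference is not set by any loop above it, and persistent error in such an intrinsic variable drives slow, more-or-less random adjustment of the parameters of the hierarchy below until the error goes away:

```python
import random

HUNGER_REFERENCE = 0.0   # built in (genetic/instinctual), not the output of a higher loop

def reorganize(parameters, hunger, step_size=0.01):
    """If the intrinsic variable stays far from its reference, randomly tweak
    the organization of the hierarchy; if things are fine, leave it alone."""
    intrinsic_error = abs(hunger - HUNGER_REFERENCE)
    if intrinsic_error > 0.5:
        return [p + random.gauss(0, step_size * intrinsic_error) for p in parameters]
    return parameters
```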

Indeed, the entire remainder of the book, discussing emotion, conflict, and so on, fleshes out this perspective far more fully than I can touch on here, so I will simply recommend reading the book if you’re interested in his model. Here’s a sample on conflict:

Conflict is an encounter between two control systems, an encounter of a specific kind. In effect, the two control systems attempt to control the same quantity, but with respect to two different reference levels. For one system to correct an error, the other system must experience error. There is no way for both systems to experience zero error at the same time. Therefore the outputs of the system must act on the shared controlled quantity in opposite directions.
If both systems are reasonably sensitive to error, and the two reference levels are far apart, there will be a range of values of the controlled quantity (between the reference levels) throughout which each system will contain an error signal so large that the output of each system will be solidly at its maximum. These two outputs, if about equal, will cancel, leaving essentially no net output to affect the controlled quantity. Certainly the net output cannot change as the “controlled” quantity changes in this region between the reference levels, since both outputs remain at maximum.
This means there is a range of values over which the controlled quantity cannot be protected against disturbance any more. Any moderate disturbance will change the controlled quantity, and this will change the perceptual signals in the two control systems. As long as neither reference level is closely approached, there will be no reaction to these changes on the part of the conflicted systems.
When a disturbance forces the controlled quantity close enough to either reference level, however, there will be a reaction. The control system experiencing lessened error will relax, unbalancing the net output in the direction of the other reference level. As a result, the conflicted pair of systems will act like a single system having a “virtual reference level,” between the two actual ones. A large dead zone will exist around the virtual reference level, within which there is little or no control.
In terms of real behavior, this model of conflict seems to have the right properties. Consider a person who has two goals: one to be a nice guy, and the other to be a strong, self-sufficient person. If he perceives these two conditions in the “right” way (for conflict) he may find himself wanting to be deferential and pleasant, and at the same time wanting to speak up firmly for his rights. As a result, he does neither. He drifts in a state between, his attitude fluctuating with every change in external circumstances, undirected. When cajoled and coaxed enough he may find himself beginning to warm up, smile, and think of a pleasant remark, but immediately he realizes that he is being manipulated and resentfully breaks off communication or utters a cutting remark. On the other hand if circumstances lead him to begin defending himself against unfair treatment, his first strong words fill him with remorse and he blunts his defense with an apologetic giggle. He can react only when pushed to one extreme or the other, and his reaction takes him back to the uncontrolled middle ground.
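
The quoted passage is easy to reproduce numerically. Here is a small sketch, with parameters of my own choosing, of two saturating controllers acting on the same quantity with references at -1 and +1: between the references both outputs are pinned at their maxima and cancel (the dead zone), and only near or beyond either reference does the net output wake up and push back toward the middle.

```python
import numpy as np

def output(reference, perception, gain=10.0, max_out=1.0):
    """A controller with a hard limit on its output."""
    return np.clip(gain * (reference - perception), -max_out, max_out)

def net_output(x, ref_low=-1.0, ref_high=+1.0):
    """Two conflicted systems acting on the same quantity x."""
    return output(ref_low, x) + output(ref_high, x)

for x in np.linspace(-1.5, 1.5, 7):
    print(f"quantity={x:+.2f}  net output={net_output(x):+.2f}")
# Between the two references the net output is ~0 (the dead zone around the
# "virtual reference level"); outside them, the relaxed system unbalances the
# pair and the net output pushes back toward the middle.
```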

So what was that about behaviorism?

According to Powers, most behaviorists thought in terms of ‘stimulus->response,’ where you could model a creature as a lookup table that would respond in a particular way to a particular stimulus. This has some obvious problems: how do we cluster stimuli? Someone saying “I love you” means very different things depending on the context. If the creature has a goal that depends on a relationship between entities, like wanting there to be no unblocked line between its eyes and the sun, then you need to know the position of the sun to best model its response to any stimulus. Otherwise, if you just record what happens when you move a shade to the left, you’ll notice that sometimes the creature moves left and sometimes it moves right. (Consider the difference between 1-place functions and 2-place functions.)

Powers discusses a particular experiment involving electrical stimulation of neurons in cats. The researchers couldn’t easily interpret what some of the neurons were doing in behaviorist terms, because the cat would inconsistently move one way or another. The control theory view parsimoniously explained those neurons as higher order: the stimulation adjusted a reference level, so the original position had to be taken into account to determine the error, and it’s the error that determines the response rather than just the reference.
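
Something like the following toy sketch captures both the shade example and the cat result (my framing; ‘position’ is just an illustrative stand-in for whatever the stimulated reference actually governed): the same reference produces opposite movements depending on where the system already is, because the response is a function of the error.

```python
def response(current_position, reference, gain=1.0):
    """Behavior is driven by the error (reference - perception),
    not by the stimulus or the reference alone."""
    error = reference - current_position
    return gain * error   # the sign of the error picks the direction of movement

print(response(current_position=0.2, reference=0.5))   # +0.3: move one way
print(response(current_position=0.8, reference=0.5))   # -0.3: move the other way
```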

If we want to have a lookup table in which the entire life history of the creature is the input, then figuring out what this table looks like is basically impossible. We want something that’s complex enough to encode realistic behavior without being complex enough to encode unrealistic behavior—that is, we want the structure of our model to match the structure of the actual brain and behavior, and it looks like the control theory view is a strong candidate.

Unfortunately, I’m not an expert in this field, so I can’t tell you what the state of the academic discussion looks like now. I get the impression that a number of psychologists have at least partly bought into the B:CP paradigm (called Perceptual Control Theory) and have been pursuing it for decades, but it doesn’t seem to have swept the field. As a general comment, controversies like this are often resolved by synthesis rather than the complete victory of one side over the other. If modern psychologists have learned a bit of the hierarchical control systems viewpoint and avoided the worst silliness of the past, then the historic criticisms are no longer appropriate and most of the low-hanging fruit from adopting this view has already been picked.

Next: a comparison with utility, discussion of previous discussion on LW, and some thoughts on how thinking about control systems can impact thinking about AI.