The Comedy of Behaviorism

Followup to: Humans in Funny Suits

“Let me see if I understand your thesis. You think we shouldn’t anthropomorphize people?”
-- Sidney Morgenbesser to B. F. Skinner

Behaviorism was the doctrine that it was unscientific for a psychologist to ascribe emotions, beliefs, or thoughts to a human being. After all, you can’t directly observe anger or an intention to hit someone. You can only observe the punch. You may hear someone say “I’m angry!” but that’s hearing a verbal behavior, not seeing anger. Thoughts are not observable, therefore they are unscientific, therefore they do not exist. Oh, you think you’re thinking, but that’s just a delusion—or it would be, if there were such things as delusions.

This was before the day of computation, before the concept of information processing, before the “cognitive revolution” that led to the modern era. If you looked around for an alternative to behaviorism, you didn’t find, “We’re going to figure out how the mind processes information by scanning the brain, reading neurons, and performing experiments that test our hypotheses about specific algorithms.” You found Freudian psychoanalysis. This may make behaviorism a bit more sympathetic.

Part of the origin of behaviorism lay in a backlash against substance dualism—the idea that mind is a separate substance from ordinary physical phenomena. Today we would say, “The apparent specialness comes from the brain doing information-processing; a physical deed to be sure, but one of a different style from smashing a brick with a sledgehammer.” The behaviorists said, “There is no mind.” (John B. Watson, founder of behaviorism, in 1928.)

The behaviorists outlawed not just dualistic mind-substance, but any such things as emotions and beliefs and intentions (unless defined in terms of outward behavior). After all, science had previously done away with angels, and de-gnomed the haunted mine. Clearly the method of reason, then, was to say that things didn’t exist. Having thus fathomed the secret of science, the behaviorists proceeded to apply it to the mind.

You might be tempted to say, “What fools! Obviously, the mind, like the rainbow, does exist; it is to be explained, not explained away. Saying ‘the subject is angry’ helps me predict the subject; the belief pays its rent. The hypothesis of anger is no different from any other scientific hypothesis.”

That’s mostly right, but not that final sentence. “The subject is angry, even though I can’t read his mind” is not quite the same sort of hypothesis as “this hydrogen atom contains an electron, even though I can’t see it with my naked eyes”.

Let’s say that I have a confederate punch the research subject in the nose. The research subject punches the confederate back.

The behaviorist says, “Clearly, the subject has been previously conditioned to punch whoever punches him.”

But now let’s say that the subject’s hands are tied behind his back, so that he can’t return the punch. On the hypothesis that the subject becomes angry, and wants to hurt the other person, we might predict that the subject will take any of many possible avenues to revenge—a kick, a trip, a bite, a phone call two months later that leads the confederate’s wife and girlfriend to the same hotel… All of these I can predict by saying, “The subject is angry, and wants revenge.” Even if I offer the subject a new sort of revenge that the subject has never seen before.

You can’t account for that by Pavlovian reflex conditioning, without hypothesizing internal states of mind.

And yet—what is “anger”? How do you know what is the “angry” reaction? How do you know what tends to cause “anger”? You’re getting good predictions of the subject, but how?

By empathic inference: by configuring your own brain in a similar state to the brain that you want to predict (in a controlled sort of way that doesn’t lead you to actually hit anyone). This may yield good predictions, but that’s not the same as understanding. You can predict angry people by using your own brain in empathy mode. But could you write an angry computer program? You don’t know how your brain is making the successful predictions. You can’t print out a diagram of the neural circuitry involved. You can’t formalize the hypothesis; you can’t make a well-understood physical system that predicts without human intervention; you can’t derive the exact predictions of the model; you can’t say what you know.

In modern cognitive psychology, there are standard ways of handling this kind of problem in a “scientific” way. One panel of reviewers rates how much a given stimulus is likely to make a subject “angry”, and a second independent panel of reviewers rates how much a given response is “angry”; neither panel is told the purpose of the experiment. This is designed to prevent self-favoring judgments of whether the experimental hypothesis has been confirmed. But it doesn’t get you closer to opening the opaque box of anger.
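To make the bookkeeping concrete, here is a minimal sketch in Python, with hypothetical ratings and an assumed 1–7 scale. The point of the design is that the experimenter only ever handles the panels’ numbers; the judgment of what counts as “angry” never leaves the raters’ heads.

```python
# Minimal sketch of a blind two-panel rating design (hypothetical data).
# Panel A rates each stimulus for how anger-provoking it is; Panel B,
# working independently, rates the subject's response to that stimulus
# for how "angry" it is. Neither panel is told the hypothesis.

from statistics import mean

def panel_score(ratings_per_item):
    """Average each item's ratings across raters (1-7 scale assumed)."""
    return [mean(item) for item in ratings_per_item]

def pearson_r(xs, ys):
    """Plain Pearson correlation, so the example stays self-contained."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical ratings: rows are items, columns are individual raters.
stimulus_ratings = [[6, 7, 6], [2, 1, 2], [5, 4, 5], [7, 7, 6]]  # provoking?
response_ratings = [[7, 6, 6], [1, 2, 1], [4, 5, 4], [6, 7, 7]]  # angry?

provocation = panel_score(stimulus_ratings)
anger = panel_score(response_ratings)

# The experimenter sees only this number; the opaque box of "anger"
# stays opaque inside each rater.
print(f"stimulus-response correlation: {pearson_r(provocation, anger):.2f}")
```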

Can you really call a hypothesis like that a model? Is it really scientific? Is it even Bayesian—can you talk about it in terms of probability theory?

The less radical behaviorists did not say that the mind unexisted, only that no scientist should ever talk about the mind. Suppose we now allow all algorithmic hypotheses about the mind, where the hypothesis is framed in terms that can be calculated on a modern computer, so that experimental predictions can be formally made and observationally confirmed. This gets you a large swathe of modern cognitive science, but not the whole thing. Is the rest witchcraft?

I would say “no”. In terms of probability theory, I would see “the subject is angry” as a hypothesis relating the outputs of two black boxes, one of which happens to be located inside your own brain. You’re supposing that whatever the subject does next will resemble the output of this ‘anger’ black box. This ‘anger’ box happens to be located inside you, but is nonetheless opaque, and yet still seems to have a strong, observable correspondence to the other ‘anger’ box. If two black boxes often have the same output, this is an observable thing; it can be described by probability theory.
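A minimal sketch of that claim, in Python and with hypothetical trial outcomes: you never open either box, you just count how often their outputs agree and keep a posterior distribution over the agreement rate.

```python
# Sketch: "these two black boxes usually give the same output" treated as
# an ordinary statistical hypothesis. Both boxes stay opaque; all we record
# is whether the empathic prediction matched the subject's behavior.
# The trial outcomes below are hypothetical.

agreements = [True, True, False, True, True, True, False, True, True, True]

matches = sum(agreements)
misses = len(agreements) - matches

# Starting from a uniform Beta(1, 1) prior over the agreement rate,
# the posterior after these trials is Beta(1 + matches, 1 + misses).
alpha, beta = 1 + matches, 1 + misses
posterior_mean = alpha / (alpha + beta)

print(f"observed agreement: {matches}/{len(agreements)}")
print(f"posterior mean agreement rate: {posterior_mean:.2f}")
```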

From the perspective of scientific procedure, there are many ‘anger’ boxes scattered around, so we use other ‘anger’ boxes instead of the experimenter’s. And since all the black boxes are noisy and have poorly controlled environments, we use multiple ‘anger’ boxes in calculating our theoretical predictions, and more ‘anger’ boxes to gather our experimental results. That may not be as precise as a voltmeter, but it’s good enough to do repeatable experimental science.

(Over on the Artificial Intelligence side of things, though, any concept you can’t compute is magic. At best, it’s a placeholder for your speculations, a space where you’ll put a real theory later. Marcello and I adopted the rule of explicitly saying ‘magical’ to describe any cognitive operation that we didn’t know exactly how to compute.)

Oh, and by the way, I suspect someone will say: “But you can account for complex revenges using behaviorism: you just say the subject is conditioned to take revenge when punched!” Unless you can calculate complex revenges with a computer program, you are using your own mind to determine what does and does not constitute a “complex revenge”. Using the word “conditioned” just disguises the empathic black box—the empathic black box is contained in the concept of revenge, which you can recognize, but which you could not write a program to recognize.

So empathic cognitive hypotheses, as opposed to algorithmic cognitive hypotheses, are indeed special. They require special handling in experimental procedure; they cannot qualify as final theories.

But for the behaviorists to react to the sins of Freudian psychoanalysis and substance dualism, by saying that the subject matter of empathic inference did not exist...

...okay, I’m sorry, but I think that even without benefit of hindsight, that’s a bit silly. A case in point that reversed stupidity is not intelligence.

Behaviorism stands beside Objectivism as one of the great historical lessons against rationalism.

Now, you do want to be careful when accusing people of “rationalism”. Most of the time when I hear someone accused of “rationalism”, it is a creationist accusing someone of “rationalism” for denying the existence of God, or a psychic believer accusing someone of “rationalism” for denying the special powers of the mind, etcetera.

But reversed stupidity is not intelligence: even if most people who launch accusations of “rationalism” are creationists and the like, this does not mean that no such error as rationalism exists. There really is a fair amount of historical insanity among various folks who thought of themselves as “rationalists”, but who mistook some correlate of rationality for its substance.

And there is a very general problem where rationalists occasionally do a thing, and people assume that this act is the very substance of the Way and you ought to do it as often as possible.

It is not the substance of the Way to reject entities about which others have said stupid things. Though sometimes, yes, people say stupid things about a thing which does not exist, and a rationalist will say “It does not exist”. It is not the Way to assert the nonexistence of that which is difficult to measure. Though sometimes, yes, that which is difficult to observe, is not there, and a rationalist will say “It is not there”. But you also have to make equally accurate predictions without the discarded concept. That part is key.

The part where you cry furiously against ancient and outmoded superstitions, the part where you mock your opponents for believing in magic, is not key. Not unless you also take care of that accurate predictions thing.

Crying “Superstition!” does play to the audience stereotype of rationality, though. And indeed real rationalists have been known to inveigh thus—often—but against gnomes, not rainbows. Knowing the difference is the difficult part! You are not automatically more hardheaded as a rationalist, the more things whose existence you deny. If it is good to deny phlogiston, it is not twice as good to also deny anger.

Added: I found it difficult to track down primary source material online, but behaviorism-as-denial-of-the-mental does not seem to be a straw depiction. I was able to track down at least one major behaviorist (J. B. Watson, founder of behaviorism) saying outright “There is no mind.” See my comment below.