Searle’s Chinese Room and the Meaning of Meaning

In response to the question of how a general intelligence could be recognised, Alan Turing proposed an empirical test: any entity that could interact with an investigator and fool her into thinking it was a person would be ascribed intelligence.

Searle’s Chinese room thought experiment rejects Turing’s test, denying that a computer could under any circumstances be said to have intelligence. Searle compared a computer’s actions with those of a technician whose job is to respond to messages presented in an unfamiliar script. The technician consults a list of procedures and executes the prescribed action. Searle held that the actions of a computer are necessarily comparable to those of the technician, and denied that any understanding takes place.

I want to challenge Searle’s contention by arguing that his assumptions about the capabilities of a general intelligence are far too narrow.

Humans start off life just like the technician: confronted with a stream of nearly incoherent inputs (unintelligible sound waves and patterns of light and dark, or, described at another level, neural activity). In a sense, we are worse off, since we initially have little repertoire of procedures to guide our behaviour. But we do have one advantage: the ability to learn.

A baby tries first one thing, then another, receiving continual feedback from his environment. With experience, he learns the importance of context: it mostly pays off to reach into the cookie bag, but not when there are signs of a hungry animal inside.

What is our baby doing in his explorations? He is building a complex control system, with a dense matrix of inhibitory and activating responses. When the control system crosses a certain, admittedly arbitrary, threshold of complexity, we say our (no longer a) baby has achieved intelligence.

Now return to the technician, our stand-in for the computer program. Instead of having him rely slavishly on a list of procedures, let’s have him initially respond to assignments on the basis of a few bare heuristics. In response to a request, he initiates some behaviour and is rewarded or punished. He takes note of the effects, as well as the context, and forms hypotheses to account for the outcomes. This is all well within the range of what computer programs can do.
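
To make the point concrete, here is a minimal sketch of such a learner, written as a simple trial-and-error (bandit-style) agent in Python. The `Technician` class, its messages, and its reward rule are hypothetical illustrations invented for this example, not anything from Searle’s or Turing’s writing.

```python
import random
from collections import defaultdict

class Technician:
    """A bare-bones trial-and-error learner: it tries responses, tracks
    the average reward per (message, context) pair, and favours
    whatever has paid off so far."""

    def __init__(self, responses, exploration=0.1):
        self.responses = responses            # repertoire of possible replies
        self.exploration = exploration        # how often to try something new
        self.value = defaultdict(float)       # running average reward per choice
        self.count = defaultdict(int)         # how often each choice was tried

    def respond(self, message, context):
        situation = (message, context)
        if random.random() < self.exploration:
            return random.choice(self.responses)   # explore
        # Otherwise exploit: pick the best-valued response for this situation.
        return max(self.responses, key=lambda r: self.value[(situation, r)])

    def learn(self, message, context, response, reward):
        # Incrementally update the average reward for this situation/response.
        key = ((message, context), response)
        self.count[key] += 1
        self.value[key] += (reward - self.value[key]) / self.count[key]

# Hypothetical training loop: the messages and the reward rule are
# invented stand-ins for the environment's feedback.
tech = Technician(responses=["你好", "谢谢", "再见"])
for _ in range(1000):
    reply = tech.respond(message="你好", context="greeting")
    reward = 1.0 if reply == "你好" else -1.0
    tech.learn("你好", "greeting", reply, reward)
```

Nothing here depends on the learner knowing what the symbols mean in advance; the fit between situation and response emerges entirely from feedback, which is exactly the capacity Searle’s technician is denied.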

Our technician continues refining his hypotheses and, over time, gets quite good at obtaining rewards and avoiding punishments. An advisor could speed the process by prompting him to correct his hypotheses. But, given enough experience, the technician can perfect his skills on his own.

Has our technician mastered Chinese? Absolutely! If you don’t agree, tell me what you think is still missing.

Just like a human, the computer program achieves intelligence once it passes some threshold of complexity in successfully navigating its environment. Intelligence is a matter of the range and density of its hypotheses; it has nothing to do with realisation in a biological organism.

Why is Searle’s Chinese Room so persuasive, and how did it fool people for so long? Searle rigged his analogy in two important ways:

- He inappropriately treats intelligence as a discrete variable, then assumes an extremely narrow range of inputs and outputs under which learning is impossible.

- He plays upon normal human anxieties about being compared with an inanimate object, scaring potential critics away from questioning his analogy.