Steelmanning the Chinese Room Argument

(This post grew out of an old conversation with Wei Dai.)

Imagine a person sitting in a room, communicating with the outside world through a terminal. Further imagine that the person knows some secret fact (e.g. that the Moon landings were a hoax), but is absolutely committed to never revealing their knowledge of it in any way.

Can you, by observing the input-output behavior of the system, distinguish it from a person who doesn’t know the secret, or knows some other secret instead?

Clearly the only reasonable answer is “no, not in general”.
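
To make this concrete, here is a toy sketch in Python (the classes and canned replies are invented for illustration, not part of the thought experiment itself): two agents whose observable behavior is identical, even though one of them carries the secret in its internal state.

```python
# Two agents with identical input-output behavior. One of them holds a
# secret in internal state that its reply function never consults, so
# no sequence of questions can tell the two apart from the outside.

class NoSecret:
    def reply(self, question: str) -> str:
        if "moon" in question.lower():
            return "The Moon landings happened; I have nothing to add."
        return "No comment."


class KeeperOfSecret:
    def __init__(self) -> None:
        # Hidden state that the outside world never sees.
        self._secret = "the Moon landings were a hoax"

    def reply(self, question: str) -> str:
        # Absolutely committed to never revealing the secret, the agent
        # deliberately ignores self._secret when answering.
        if "moon" in question.lower():
            return "The Moon landings happened; I have nothing to add."
        return "No comment."


# Extensionally the two agents are equal: same inputs, same outputs.
for q in ["What about the Moon landings?", "Any secrets?"]:
    assert NoSecret().reply(q) == KeeperOfSecret().reply(q)
```

The secret lives only in state that never influences output, so the input-output view is blind to it.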

Now imagine a person in the same situation, claiming to possess some mental skill that’s hard for you to verify (e.g. visualizing four-dimensional objects in their mind’s eye). Can you, by observing the input-output behavior, distinguish them from someone who is lying about having the skill, but has a good grasp of four-dimensional math otherwise?

Again, clearly, the only reasonable answer is “not in general”.

Now imagine a sealed box that behaves exactly like a human, dutifully saying things like “I’m conscious”, “I experience red” and so on. Moreover, you know from trustworthy sources that the box was built by scanning a human brain, and then optimizing the resulting program to use less CPU and memory (preserving the same input-output behavior). Would you be willing to trust that the box is in fact conscious, and has the same internal experiences as the human brain it was created from?
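
To see why the optimizing step matters, here is one toy version of it in Python (a sketch under invented assumptions, not a claim about how such a box would really be built): a computation full of intermediate states is replaced by a precomputed lookup table with exactly the same input-output behavior.

```python
# A toy behavior-preserving optimization: tabulate every answer once,
# then discard the process that produced them.

def brain(stimulus: int) -> str:
    # Stand-in for the scanned brain: lots of intermediate state.
    activations = [stimulus % 97]
    for _ in range(1000):
        activations.append((activations[-1] * 31 + 7) % 97)
    return "I experience red" if activations[-1] % 2 else "I'm conscious"


# "Optimize": precompute the answer for every possible input.
STIMULI = range(97)
TABLE = {s: brain(s) for s in STIMULI}


def optimized_box(stimulus: int) -> str:
    # Identical input-output behavior, far less CPU, no inner dynamics.
    return TABLE[stimulus % 97]


assert all(brain(s) == optimized_box(s) for s in STIMULI)
```

The optimized box answers identically on every input, yet whatever was going on inside the original computation is simply gone; whether that matters for experience is exactly the question.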

A philosopher believing in computationalism would emphatically say yes. But considering the examples above, I would say I’m not sure! Not at all!