How An Algorithm Feels From Inside

“If a tree falls in the forest, and no one hears it, does it make a sound?” I remember seeing an actual argument get started on this subject—a fully naive argument that went nowhere near Berkeleyan subjectivism. Just:

“It makes a sound, just like any other falling tree!”
“But how can there be a sound that no one hears?”

The standard rationalist view would be that the first person is speaking as if “sound” means acoustic vibrations in the air; the second person is speaking as if “sound” means an auditory experience in a brain. If you ask “Are there acoustic vibrations?” or “Are there auditory experiences?”, the answer is at once obvious. And so the argument is really about the definition of the word “sound”.

I think the standard analysis is essentially correct. So let’s accept that as a premise, and ask: Why do people get into such an argument? What’s the underlying psychology?

A key idea of the heuristics and biases program is that mistakes are often more revealing of cognition than correct answers. Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake.

So what kind of mind design corresponds to that error?

In Disguised Queries I introduced the blegg/rube classification task, in which Susan the Senior Sorter explains that your job is to sort objects coming off a conveyor belt, putting the blue eggs or “bleggs” into one bin, and the red cubes or “rubes” into the rube bin. This, it turns out, is because bleggs contain small nuggets of vanadium ore, and rubes contain small shreds of palladium, both of which are useful industrially.

Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a “rube” instead? You’re going to put it in the rube bin—why not call it a “rube”?

But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

So if you find a blue egg-shaped object that contains palladium, and you ask “Is it a blegg?”, the answer depends on what you have to do with the answer: If you ask “Which bin does the object go in?”, then you choose as if the object is a rube. But if you ask “If I turn off the light, will it glow?”, you predict as if the object is a blegg. In one case, the question “Is it a blegg?” stands in for the disguised query, “Which bin does it go in?”. In the other case, the question “Is it a blegg?” stands in for the disguised query, “Will it glow in the dark?”
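
To make the two disguised queries concrete, here is a minimal sketch in Python. It is my own illustration rather than anything from the original sorting scenario: the features come from the text above, while the function names and data layout are hypothetical.

```python
# Illustrative sketch only: the features come from the blegg/rube scenario
# above; the function names and dict layout are hypothetical.

def which_bin(obj):
    """Disguised query 1: which bin does the object go in?"""
    # Sorting is really about the metal inside.
    return "rube bin" if obj["metal"] == "palladium" else "blegg bin"

def will_glow(obj):
    """Disguised query 2: if I turn off the light, will it glow?"""
    # Glowing tracks the blue egg-shaped surface, not the metal inside.
    return obj["color"] == "blue" and obj["shape"] == "egg"

# The anomalous object: blue and egg-shaped, but containing palladium.
anomaly = {"color": "blue", "shape": "egg", "metal": "palladium"}

print(which_bin(anomaly))  # "rube bin" -- choose as if it were a rube
print(will_glow(anomaly))  # True       -- predict as if it were a blegg
```

Both queries return definite answers for the anomalous object, and neither of them needs any further “is it a blegg?” computation.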

Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark.

This answers every query, observes every observable introduced. There’s nothing left for a disguised query to stand for.

So why might someone feel an impulse to go on arguing whether the object is really a blegg?

[Diagram from Neural Categories: Network 1 and Network 2]

This diagram from Neural Categories shows two different neural networks that might be used to answer questions about bleggs and rubes. Network 1 has a number of disadvantages—such as potentially oscillating/chaotic behavior, or requiring O(N²) connections—but Network 1’s structure does have one major advantage over Network 2: Every unit in the network corresponds to a testable query. If you observe every observable, clamping every value, there are no units in the network left over.

Network 2, however, is a far better candidate for being something vaguely like how the human brain works: It’s fast, cheap, scalable—and has an extra dangling unit in the center, whose activation can still vary, even after we’ve observed every single one of the surrounding nodes.

Which is to say that even after you know whether an object is blue or red, egg or cube, furred or smooth, bright or dark, and whether it contains vanadium or palladium, it feels like there’s a leftover, unanswered question: But is it really a blegg?
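
For readers who want that contrast spelled out, here is a toy sketch of the two architectures. It is my own construction, not the exact diagram: the feature values, the weights, and the activation rule are all arbitrary illustrative choices.

```python
import math

# Toy sketch of the two architectures; not the exact diagram from Neural
# Categories. Feature values are +1/-1; the weights and the tanh rule are
# arbitrary illustrative choices.

# The anomalous object from the text: blue, egg-shaped, furred, flexible,
# opaque, glows in the dark -- but contains palladium rather than vanadium.
observed = {"blue": +1, "egg": +1, "furred": +1, "flexible": +1,
            "opaque": +1, "glows": +1, "vanadium": -1}

# Network 1: every unit is itself an observable. Clamp them all to the
# observed values and there is nothing left in the network to compute.
network1_units = dict(observed)

# Network 2: one extra central unit wired to every observable. Even with
# every surrounding node clamped, this unit still has an activation of its
# own, and its value depends on how you choose to wire it up.
def central_unit(obs, weights=None):
    weights = weights or {name: 1.0 for name in obs}  # hypothetical weights
    total = sum(weights[name] * value for name, value in obs.items())
    return math.tanh(total / len(obs))

print(central_unit(observed))  # roughly 0.61: a leftover "blegg-ness" number
```

With Network 1, clamping the observables exhausts the network; with Network 2, a number is still left over even after everything has been observed, and that leftover number is what the leftover question feels like from inside.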

Usually, in our daily experience, acoustic vibrations and auditory experience go together. But a tree falling in a deserted forest unbundles this common association. And even after you know that the falling tree creates acoustic vibrations but not auditory experience, it feels like there’s a leftover question: Did it make a sound?

We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet?

Now remember: When you look at Network 2, as I’ve laid it out here, you’re seeing the algorithm from the outside. People don’t think to themselves, “Should the central unit fire, or not?” any more than you think “Should neuron #12,234,320,242 in my visual cortex fire, or not?”

It takes a deliberate effort to visualize your brain from the outside—and then you still don’t see your actual brain; you imagine what you think is there, hopefully based on science, but regardless, you don’t have any direct access to neural network structures from introspection. That’s why the ancient Greeks didn’t invent computational neuroscience.

When you look at Network 2, you are seeing from the outside; but the way that neural network structure feels from the inside, if you yourself are a brain running that algorithm, is that even after you know every characteristic of the object, you still find yourself wondering: “But is it a blegg, or not?”

This is a great gap to cross, and I’ve seen it stop people in their tracks. Because we don’t instinctively see our intuitions as “intuitions”, we just see them as the world. When you look at a green cup, you don’t think of yourself as seeing a picture reconstructed in your visual cortex—although that is what you are seeing—you just see a green cup. You think, “Why, look, this cup is green,” not, “The picture in my visual cortex of this cup is green.”

And in the same way, when people argue over whether the falling tree makes a sound, or whether Pluto is a planet, they don’t see themselves as arguing over whether a categorization should be active in their neural networks. It seems like either the tree makes a sound, or not.

We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet? And yes, there were people who said this was a fight over definitions—but even that is a Network 2 sort of perspective, because you’re arguing about how the central unit ought to be wired up. If you were a mind constructed along the lines of Network 1, you wouldn’t say “It depends on how you define ‘planet’,” you would just say, “Given that we know Pluto’s orbit and shape and mass, there is no question left to ask.” Or, rather, that’s how it would feel—it would feel like there was no question left—if you were a mind constructed along the lines of Network 1.

Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can’t see their intuitions as the way their cognitive algorithms happen to look from the inside.

And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are—and discarded as obviously wrong.