Angry Atoms

Fundamental physics—quarks ’n stuff—is far removed from the levels we can see, like hands and fingers. At best, you can know how to replicate the experiments which show that your hand (like everything else) is composed of quarks, and you may know how to derive a few equations for things like atoms and electron clouds and molecules.

At worst, the existence of quarks beneath your hand may just be something you were told. In which case it’s questionable in what sense you can be said to “know” it at all, even if you repeat back the same word “quark” that a physicist would use to convey knowledge to another physicist.

Either way, you can’t actually see the identity between levels—no one has a brain large enough to visualize avogadros of quarks and recognize a hand-pattern in them.

But we at least understand what hands do. Hands push on things, exert forces on them. When we’re told about atoms, we visualize little billiard balls bumping into each other. This makes it seem obvious that “atoms” can push on things too, by bumping into them.

Now this notion of atoms is not quite correct. But so far as human imagination goes, it’s relatively easy to imagine our hand being made up of a little galaxy of swirling billiard balls, pushing on things when our “fingers” touch them. Democritus imagined this 2400 years ago, and there was a time, roughly 1803-1922, when Science thought he was right.

But what about, say, anger?

How could little billiard balls be angry? Tiny frowny faces on the billiard balls?

Put yourself in the shoes of, say, a hunter-gatherer—someone who may not even have a notion of writing, let alone the notion of using base matter to perform computations—someone who has no idea that such a thing as neurons exist. Then you can imagine the functional gap that your ancestors might have perceived between billiard balls and “Grrr! Aaarg!”

Forget about subjective experience for the moment, and consider the sheer behavioral gap between anger and billiard balls. The difference between what little billiard balls do, and what anger makes people do. Anger can make people raise their fists and hit someone—or say snide things behind their backs—or plant scorpions in their tents at night. Billiard balls just push on things.

Try to put yourself in the shoes of the hunter-gatherer who’s never had the “Aha!” of information-processing. Try to avoid hindsight bias about things like neurons and computers. Only then will you be able to see the uncrossable explanatory gap:

How can you explain angry behavior in terms of billiard balls?

Well, the obvious materialist conjecture is that the little billiard balls push on your arm and make you hit someone, or push on your tongue so that insults come out.

But how do the little billiard balls know how to do this—or how to guide your tongue and fingers through long-term plots—if they aren’t angry themselves?

And besides, if you’re not seduced by—gasp!—scientism, you can see from a first-person perspective that this explanation is obviously false. Atoms can push on your arm, but they can’t make you want anything.

Someone may point out that drinking wine can make you angry. But who says that wine is made exclusively of little billiard balls? Maybe wine just contains a potency of angerness.

Clearly, reductionism is just a flawed notion.

(The novice goes astray and says “The art failed me”; the master goes astray and says “I failed my art.”)

What does it take to cross this gap? It’s not just the idea of “neurons” that “process information”—if you say only this and nothing more, it just inserts a magical, unexplained level-crossing rule into your model, where you go from billiards to thoughts.

But an Artificial Intelligence programmer who knows how to create a chess-playing program out of base matter has taken a genuine step toward crossing the gap. If you understand concepts like consequentialism, backward chaining, utility functions, and search trees, you can make merely causal/mechanical systems compute plans.

The trick goes something like this: For each possible chess move, compute the moves your opponent could make, then your responses to those moves, and so on; evaluate the furthest position you can see using some local algorithm (you might simply count up the material); then trace back using minimax to find the best move on the current board; then make that move.
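To make the trick concrete, here is a minimal sketch of that recipe—expand the tree of moves and counter-moves, evaluate the leaves, trace back with minimax. Chess is too big for a sketch, so this uses a toy game of my own choosing (players alternate taking 1 or 2 stones; whoever takes the last stone wins), where the “local evaluation” is simply who won at the leaf:

```python
def minimax(stones, maximizing=True):
    """Score a position in a toy take-1-or-2 game from the first player's
    perspective (+1 = first player wins, -1 = loses), and return the best
    move. Same shape as the chess recipe: expand each possible move, expand
    the opponent's replies, evaluate the furthest positions, trace back."""
    if stones == 0:
        # Leaf evaluation: the previous player took the last stone and won,
        # so whoever is to move here has already lost.
        return (-1 if maximizing else 1), None
    best_score, best_move = (-2, None) if maximizing else (2, None)
    for move in (1, 2):
        if move > stones:
            continue
        score, _ = minimax(stones - move, not maximizing)
        if (maximizing and score > best_score) or \
           (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Nothing in this function wants to win; it is causal machinery all the way down, and yet its outputs are the moves a motivated player would make (from 4 stones it takes 1, leaving the opponent the losing pile of 3).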

More generally: If you have chains of causality inside the mind that have a kind of mapping—a mirror, an echo—to what goes on in the environment, then you can run a utility function over the end products of imagination, and find an action that achieves something which the utility function rates highly, and output that action. It is not necessary for the chains of causality inside the mind, that are similar to the environment, to be made out of billiard balls that have little auras of intentionality. Deep Blue’s transistors do not need little chess pieces carved on them, in order to work. See also The Simple Truth.
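The general recipe can be sketched even more compactly. In this illustrative fragment (all the names are mine, not standard), `model` is the “mirror” of the environment—it imagines the consequence of an action—and `utility` scores the end products of that imagination:

```python
def plan(state, actions, model, utility):
    """Imagine each action's consequence with the forward model, score the
    imagined outcome, and output the best-scoring action. No part of this
    is angry or wants anything; it is just mechanical cause and effect."""
    return max(actions, key=lambda a: utility(model(state, a)))

# Toy usage: an agent whose utility function rates states near 12 highly.
best = plan(
    state=10,
    actions=[-1, 0, 1],
    model=lambda s, a: s + a,        # imagination: an echo of the world
    utility=lambda s: -abs(s - 12),  # run over the end products of imagination
)
```

The planner outputs the action (here, +1) that the utility function rates highly, without any intentionality in the parts.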

All this is still tremendously oversimplified, but it should, at least, reduce the apparent length of the gap. If you can understand all that, you can see how a planner built out of base matter can be influenced by alcohol to output more angry behaviors. The billiard balls in the alcohol push on the billiard balls making up the utility function.

But even if you know how to write small AIs, you can’t visualize the level-crossing between transistors and chess. There are too many transistors, and too many moves to check.

Likewise, even if you knew all the facts of neurology, you would not be able to visualize the level-crossing between neurons and anger—let alone the level-crossing between atoms and anger. Not the way you can visualize a hand consisting of fingers, thumb, and palm.

And suppose a cognitive scientist just flatly tells you “Anger is hormones”? Even if you repeat back the words, it doesn’t mean you’ve crossed the gap. You may believe you believe it, but that’s not the same as understanding what little billiard balls have to do with wanting to hit someone.

So you come up with interpretations like, “Anger is mere hormones, it’s caused by little molecules, so it must not be justified in any moral sense—that’s why you should learn to control your anger.”

Or, “There isn’t really any such thing as anger—it’s an illusion, a quotation with no referent, like a mirage of water in the desert, or looking in the garage for a dragon and not finding one.”

These are both tough pills to swallow (not that you should swallow them) and so it is a good deal easier to profess them than to believe them.

I think this is what non-reductionists/non-materialists think they are criticizing when they criticize reductive materialism.

But materialism isn’t that easy. It’s not as cheap as saying, “Anger is made out of atoms—there, now I’m done.” That wouldn’t explain how to get from billiard balls to hitting. You need the specific insights of computation, consequentialism, and search trees before you can start to close the explanatory gap.

All this was a relatively easy example by modern standards, because I restricted myself to talking about angry behaviors. Talking about outputs doesn’t require you to appreciate how an algorithm feels from inside (cross a first-person/third-person gap) or dissolve a wrong question (untangle places where the interior of your own mind runs skew to reality).

Going from material substances that bend and break, burn and fall, push and shove, to angry behavior, is just a practice problem by the standards of modern philosophy. But it is an important practice problem. It can only be fully appreciated if you realize how hard it would have been to solve before writing was invented. There was once an explanatory gap here—though it may not seem that way in hindsight, now that it’s been bridged for generations.

Explanatory gaps can be crossed, if you accept help from science, and don’t trust the view from the interior of your own mind.