Reductionism

Almost one year ago, in April 2007, Matthew C submitted the following suggestion for an Overcoming Bias topic:

“How and why the current reigning philosophical hegemon (reductionistic materialism) is obviously correct [...], while the reigning philosophical viewpoints of all past societies and civilizations are obviously suspect—”

I remember this, because I looked at the request and deemed it legitimate, but I knew I couldn’t do that topic until I’d started on the Mind Projection Fallacy sequence, which wouldn’t be for a while...

But now it’s time to begin addressing this question. And while I haven’t yet come to the “materialism” issue, we can now start on “reductionism”.

First, let it be said that I do indeed hold that “reductionism”, according to the meaning I will give for that word, is obviously correct; and to perdition with any past civilizations that disagreed.

This seems like a strong statement, at least the first part of it. General Relativity seems well-supported, yet who knows but that some future physicist may overturn it?

On the other hand, we are never going back to Newtonian mechanics. The ratchet of science turns, but it does not turn in reverse. There are cases in scientific history where a theory suffered a wound or two, and then bounced back; but when a theory takes as many arrows through the chest as Newtonian mechanics, it stays dead.

“To hell with what past civilizations thought” seems safe enough, when past civilizations believed in something that has been falsified to the trash heap of history.

And reductionism is not so much a positive hypothesis, as the absence of belief—in particular, disbelief in a form of the Mind Projection Fallacy.

I once met a fellow who claimed that he had experience as a Navy gunner, and he said, “When you fire artillery shells, you’ve got to compute the trajectories using Newtonian mechanics. If you compute the trajectories using relativity, you’ll get the wrong answer.”

And I, and another person who was present, said flatly, “No.” I added, “You might not be able to compute the trajectories fast enough to get the answers in time—maybe that’s what you mean? But the relativistic answer will always be more accurate than the Newtonian one.”

“No,” he said, “I mean that relativity will give you the wrong answer, because things moving at the speed of artillery shells are governed by Newtonian mechanics, not relativity.”

“If that were really true,” I replied, “you could publish it in a physics journal and collect your Nobel Prize.”
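As a back-of-the-envelope check on that exchange (a hedged sketch; the shell speed of roughly 1,000 m/s is an illustrative assumption, not a figure from the story), here is how far the relativistic time-dilation factor departs from 1 at artillery speeds:

    import math

    C = 299_792_458.0   # speed of light, m/s
    v = 1_000.0         # assumed artillery-shell speed, m/s (illustrative)

    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    print(f"gamma - 1 = {gamma - 1:.3e}")   # about 5.6e-12

The Newtonian and relativistic answers differ by a few parts in a trillion at that speed; the relativistic number is not wrong, it is just an unmeasurably small correction.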

Standard physics uses the same fundamental theory to describe the flight of a Boeing 747 airplane, and collisions in the Relativistic Heavy Ion Collider. Nuclei and airplanes alike, according to our understanding, are obeying special relativity, quantum mechanics, and chromodynamics.

But we use entirely different models to understand the aerodynamics of a 747 and a collision between gold nuclei in the RHIC. A computer modeling the aerodynamics of a 747 may not contain a single token, a single bit of RAM, that represents a quark.

So is the 747 made of something other than quarks? No, you’re just modeling it with representational elements that do not have a one-to-one correspondence with the quarks of the 747. The map is not the territory.

Why not model the 747 with a chromodynamic representation? Because then it would take a gazillion years to get any answers out of the model. Also we could not store the model on all the memory on all the computers in the world, as of 2008.
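To put a rough number on that (an order-of-magnitude sketch; the 180-tonne empty mass and the one-bit-per-quark accounting are assumptions for illustration):

    NUCLEON_MASS_KG = 1.67e-27   # approximate mass of a proton or neutron
    AIRPLANE_MASS_KG = 1.8e5     # assumed ~180-tonne empty 747, illustrative
    QUARKS_PER_NUCLEON = 3       # counting valence quarks only

    nucleons = AIRPLANE_MASS_KG / NUCLEON_MASS_KG
    quarks = nucleons * QUARKS_PER_NUCLEON
    print(f"roughly {quarks:.1e} quarks")                   # ~3e32
    print(f"even 1 bit per quark: {quarks / 8:.1e} bytes")  # ~4e31 bytes

Even at a single bit per quark, before you say anything about their interactions, you are already at something like 10^31 bytes, vastly more storage than existed on Earth in 2008.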

As the saying goes, “The map is not the territory, but you can’t fold up the territory and put it in your glove compartment.” Sometimes you need a smaller map to fit in a more cramped glove compartment—but this does not change the territory. The scale of a map is not a fact about the territory, it’s a fact about the map.

If it were possible to build and run a chromodynamic model of the 747, it would yield accurate predictions. Better predictions than the aerodynamic model, in fact.

To build a fully accurate model of the 747, it is not necessary, in principle, for the model to contain explicit descriptions of things like airflow and lift. There does not have to be a single token, a single bit of RAM, that corresponds to the position of the wings. It is possible, in principle, to build an accurate model of the 747 that makes no mention of anything except elementary particle fields and fundamental forces.

“What?” cries the antireductionist. “Are you telling me the 747 doesn’t really have wings? I can see the wings right there!”

The notion here is a subtle one. It’s not just the notion that an object can have different descriptions at different levels.

It’s the notion that “having different descriptions at different levels” is itself something you say that belongs in the realm of Talking About Maps, not the realm of Talking About Territory.

It’s not that the airplane itself, the laws of physics themselves, use different descriptions at different levels—as yonder artillery gunner thought. Rather we, for our convenience, use different simplified models at different levels.

If you looked at the ultimate chromodynamic model, the one that contained only elementary particle fields and fundamental forces, that model would contain all the facts about airflow and lift and wing positions—but these facts would be implicit, rather than explicit.

You, looking at the model, and thinking about the model, would be able to figure out where the wings were. Having figured it out, there would be an explicit representation in your mind of the wing position—an explicit computational object, there in your neural RAM. In your mind.
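A toy way to see the implicit/explicit distinction (a minimal sketch; the coordinates and the name wing_tip are invented for illustration): the simulated state below stores only low-level elements, and the high-level token comes into existence only when an observer computes it.

    # Low-level "territory" of the toy model: nothing here is labeled "wing".
    particles = [
        (30.1, 5.0, 4.2),   # (x, y, z) positions only
        (30.4, 9.8, 4.2),
        (12.0, 0.3, 2.0),
    ]

    def wing_tip(state):
        # Observer-side summary: an explicit high-level token, derived from
        # the low-level state but stored only in the analyzer's memory.
        return max(state, key=lambda p: p[1])   # farthest point along y

    print(wing_tip(particles))   # (30.4, 9.8, 4.2)

The fact about where the wing tip is was always implicit in the particle positions; the explicit object appears only in the code, and the mind, doing the looking.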

You might, indeed, deduce all sorts of explicit descriptions of the airplane, at various levels, and even explicit rules for how your models at different levels interacted with each other to produce combined predictions—

And the way that algorithm feels from inside, is that the airplane would seem to be made up of many levels at once, interacting with each other.

The way a belief feels from inside, is that you seem to be looking straight at reality. When it actually seems that you’re looking at a belief, as such, you are really experiencing a belief about belief.

So when your mind simultaneously believes explicit descriptions of many different levels, and believes explicit rules for transiting between levels, as part of an efficient combined model, it feels like you are seeing a system that is made of different level descriptions and their rules for interaction.

But this is just the brain trying to efficiently compress an object that it cannot remotely begin to model on a fundamental level. The airplane is too large. Even a hydrogen atom would be too large. Quark-to-quark interactions are insanely intractable. You can’t handle the truth.

But the way physics really works, as far as we can tell, is that there is only the most basic level—the elementary particle fields and fundamental forces. You can’t handle the raw truth, but reality can handle it without the slightest simplification. (I wish I knew where Reality got its computing power.)

The laws of physics do not contain distinct additional causal entities that correspond to lift or airplane wings, the way that the mind of an engineer contains distinct additional cognitive entities that correspond to lift or airplane wings.

This, as I see it, is the thesis of reductionism. Reductionism is not a positive belief, but rather, a disbelief that the higher levels of simplified multilevel models are out there in the territory. Understanding this on a gut level dissolves the question of “How can you say the airplane doesn’t really have wings, when I can see the wings right there?” The critical words are really and see.