Against Modal Logics

Continuation of: Grasping Slippery Things
Followup to: Possibility and Could-ness, Three Fallacies of Teleology

When I try to hit a reduction problem, what usually happens is that I “bounce”—that’s what I call it. There’s an almost tangible feel to the failure, once you abstract and generalize and recognize it. Looking back, it seems that I managed to say most of what I had in mind for today’s post, in “Grasping Slippery Things”. The “bounce” is when you try to analyze a word like could, or a notion like possibility, and end up saying, “The set of realizable worlds [A’] that follows from an initial starting world A operated on by a set of physical laws f.” Where realizable contains the full mystery of “possible”—but you’ve made it into a basic symbol, and added some other symbols: the illusion of formality.
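The contrast can be sketched in code (a minimal illustration with made-up names, not anything from the post): the “bounced” definition just relabels the mystery as an opaque primitive, while a genuine reduction, in the spirit of the reachability account in “Possibility and Could-ness”, bottoms out in ordinary computation with no mysterious symbols left over.

```python
from collections import deque

def possible_circular(world, realizable):
    # The "bounce": 'possible' is defined by calling an opaque
    # 'realizable' predicate -- the mystery has only been renamed
    # and dressed up in formal notation.
    return realizable(world)

def possible_reduced(start, transitions, target):
    # A genuine reduction: 'possible' means 'reachable from the
    # start state under the transition function' -- implemented
    # here as a plain breadth-first search, with no unreduced
    # primitives anywhere in the definition.
    frontier, seen = deque([start]), {start}
    while frontier:
        state = frontier.popleft()
        if state == target:
            return True
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```

The first function looks formal but explains nothing; the second can be wrong, tested, and improved, which is the difference that matters.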

There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray—so far astray that I simply can’t make use of their years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.

The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn’t enforce reductionism, or even strive for it.

Most philosophers, as one would expect from Sturgeon’s Law, are not very good. Which means that they’re not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms. Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.’s reduction of causality or Julian Barbour’s reduction of time are rare.

So what these philosophers do instead is “bounce” off the problem into a new modal logic: A logic with symbols that embody the mysterious, opaque, unopened black box. A logic with primitives like “possible” or “necessary”, to mark the places where the philosopher’s brain makes an internal function call to cognitive algorithms as yet unknown.

And then they publish it and say, “Look at how precisely I have defined my language!”

In the Wittgensteinian era, philosophy has been about language—about trying to give precise meaning to terms.

The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries—in a word, about cognitive science.

That’s what I think post-Wittgensteinian philosophy should be about—cognitive science.

But this kind of reductionism is hard work. Ideally, you’re looking for insights on the order of Julian Barbour’s Machianism, to reduce time to non-time; insights on the order of Judea Pearl’s conditional independence, to give a mathematical structure to causality that isn’t just finding a new way to say “because”; insights on the order of Bayesianism, to show that there is a unique structure to uncertainty expressed quantitatively.

Just to make it clear that I’m not claiming a magical and unique ability, I would name Gary Drescher’s Good and Real as an example of a philosophical work that is commensurate with the kind of thinking I have to try to do. Gary Drescher is an AI researcher turned philosopher, which may explain why he understands the art of asking, not What does this term mean?, but What cognitive algorithm, as seen from the inside, would generate this apparent mystery?

(I paused while reading the first chapter of G&R. It was immediately apparent that Drescher was thinking along lines so close to my own that I wanted to write up my own independent component before looking at his—I didn’t want his way of phrasing things to take over my writing. Now that I’m done with zombies and metaethics, G&R is next up on my reading list.)

Consider the popular philosophical notion of “possible worlds”. Have you ever seen a possible world? Is an electron either “possible” or “necessary”? Clearly, if you are talking about “possibility” and “necessity”, you are talking about things that are not commensurate with electrons—which means that you’re still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.

I have to make an AI out of electrons, in this one actual world. I can’t make the AI out of possibility-stuff, because I can’t order a possible transistor. If the AI ever thinks about possibility, it’s not going to be because the AI noticed a possible world in its closet. It’s going to be because the non-ontologically-fundamental construct of “possibility” turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things. Which is to say that algorithms which make use of a “possibility” label, applied at certain points, will turn out to capture an exploitable regularity of the one real world. This is the kind of knowledge that Judea Pearl writes about. This is the kind of knowledge that AI researchers need. It is not the kind of knowledge that modern philosophy holds itself to the standard of having generated, before a philosopher gets credit for having written a paper.

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers. The work that has been done—the products of these decades of modern debate—is, by and large, just not commensurate with the kind of analysis AI needs. I feel a bit awful about saying this, because it feels like I’m telling philosophers that their life’s work has been a waste of time—not that professional philosophers would be likely to regard me as an authority on whose life has been a waste of time. But if there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it.

And: Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out—possibly fatally—whether they got it right or wrong. Philosophy doesn’t resolve things, it compiles positions and arguments. And if the debate about zombies is still considered open, then I’m sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn’t come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn’t seem very likely to build complex correct structures of conclusions.

Sorry—but philosophy, even the better grade of modern analytic philosophy, doesn’t seem to end up commensurate with what I need, except by accident or by extraordinary competence. Parfit comes to mind; and I haven’t read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there’s Gary Drescher. If there were a repository of philosophical work along those lines—not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism—then that, I might well be interested in reading. But I don’t know who, besides a few heroes, would be able to compile such a repository—who else would see a modal logic as an obvious bounce-off-the-mystery?