Against Modal Logics

Continuation of: Grasping Slippery Things
Followup to: Possibility and Could-ness, Three Fallacies of Teleology

When I try to hit a reduction problem, what usually happens is that I “bounce”—that’s what I call it. There’s an almost tangible feel to the failure, once you abstract and generalize and recognize it. Looking back, it seems that I managed to say most of what I had in mind for today’s post, in “Grasping Slippery Things”. The “bounce” is when you try to analyze a word like could, or a notion like possibility, and end up saying, “The set of realizable worlds [A’] that follows from an initial starting world A operated on by a set of physical laws f.” Where realizable contains the full mystery of “possible”—but you’ve made it into a basic symbol, and added some other symbols: the illusion of formality.
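To make the bounce concrete, here is a deliberately hollow sketch (my own illustration in Python, with made-up names like realizable and possible_worlds): it looks like a formal definition of possibility, but every bit of the mystery has simply been relocated into an unanalyzed primitive.

```python
def realizable(initial_world, laws, candidate_world):
    """The 'basic symbol': all of the mystery of 'possible' lives in here."""
    raise NotImplementedError("the black box, renamed but never opened")


def possible_worlds(initial_world, laws, all_worlds):
    """Looks formal; reduces nothing.

    Returns the set of worlds A' 'realizable' from a starting world A
    under physical laws f, where 'realizable' is exactly the notion we
    were supposed to be explaining.
    """
    return {w for w in all_worlds if realizable(initial_world, laws, w)}
```

Nothing in this sketch constrains anything; the extra symbols only supply the illusion of formality.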

There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray—so far astray that I simply can’t make use of philosophers’ years and years of dedicated work, even when they would seem to be asking questions closely akin to mine.

The proliferation of modal logics in philosophy is a good illustration of one major reason: Modern philosophy doesn’t enforce reductionism, or even strive for it.

Most philosophers, as one would expect from Sturgeon’s Law, are not very good. Which means that they’re not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms. Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.’s reduction of causality or Julian Barbour’s reduction of time are rare.

So what these philosophers do instead, is “bounce” off the problem into a new modal logic: A logic with symbols that embody the mysterious, opaque, unopened black box. A logic with primitives like “possible” or “necessary”, to mark the places where the philosopher’s brain makes an internal function call to cognitive algorithms as yet unknown.
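For concreteness, here is roughly what such a logic delivers, sketched in Python along the lines of the standard Kripke-style semantics (the worlds, the accessibility relation, and the names are all illustrative inputs I have made up): “possible” means true at some accessible world, “necessary” means true at every accessible world, and the set of worlds and the accessibility relation are handed to the evaluator as unexplained givens.

```python
def possible(p, world, worlds, accessible):
    """Diamond-p: p holds at some world accessible from `world`."""
    return any(p(w) for w in worlds if accessible(world, w))


def necessary(p, world, worlds, accessible):
    """Box-p: p holds at every world accessible from `world`."""
    return all(p(w) for w in worlds if accessible(world, w))


# The set of worlds and the accessibility relation are unexplained primitives.
worlds = {"w0", "w1", "w2"}
accessible = lambda a, b: True          # every world sees every world
is_red = lambda w: w == "w1"            # some proposition about worlds

print(possible(is_red, "w0", worlds, accessible))   # True
print(necessary(is_red, "w0", worlds, accessible))  # False
```

The machinery is perfectly precise about how to combine the primitives; it says nothing about what a “world” is or where the accessibility relation comes from.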

And then they publish it and say, “Look at how precisely I have defined my language!”

In the Wittgensteinian era, philosophy has been about language—about trying to give precise meaning to terms.

The kind of work that I try to do is not about language. It is about reducing mentalistic models to purely causal models, about opening up black boxes to find complicated algorithms inside, about dissolving mysteries—in a word, about cognitive science.

That’s what I think post-Wittgensteinian philosophy should be about—cognitive science.

But this kind of reductionism is hard work. Ideally, you’re looking for insights on the order of Julian Barbour’s Machianism, to reduce time to non-time; insights on the order of Judea Pearl’s conditional independence, to give a mathematical structure to causality that isn’t just finding a new way to say “because”; insights on the order of Bayesianism, to show that there is a unique structure to uncertainty expressed quantitatively.

Just to make it clear that I’m not claiming a magical and unique ability, I would name Gary Drescher’s Good and Real as an example of a philosophical work that is commensurate with the kind of thinking I have to try to do. Gary Drescher is an AI researcher turned philosopher, which may explain why he understands the art of asking, not What does this term mean?, but What cognitive algorithm, as seen from the inside, would generate this apparent mystery?

(I paused while reading the first chapter of G&R. It was immediately apparent that Drescher was thinking along lines so close to my own that I wanted to write up my own independent component before looking at his—I didn’t want his way of phrasing things to take over my writing. Now that I’m done with zombies and metaethics, G&R is next up on my reading list.)

Consider the popular philosophical notion of “possible worlds”. Have you ever seen a possible world? Is an electron either “possible” or “necessary”? Clearly, if you are talking about “possibility” and “necessity”, you are talking about things that are not commensurate with electrons, which means that you’re still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.

I have to make an AI out of electrons, in this one actual world. I can’t make the AI out of possibility-stuff, because I can’t order a possible transistor. If the AI ever thinks about possibility, it’s not going to be because the AI noticed a possible world in its closet. It’s going to be because the non-ontologically-fundamental construct of “possibility” turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things. Which is to say that algorithms which make use of a “possibility” label, applied at certain points, will turn out to capture an exploitable regularity of the one real world. This is the kind of knowledge that Judea Pearl writes about. This is the kind of knowledge that AI researchers need. It is not the kind of knowledge that modern philosophy holds itself to the standard of having generated, before a philosopher gets credit for having written a paper.
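As a hedged illustration (my own toy sketch, not a claim about how any particular AI must work), here is one way a “possibility” label can earn its keep inside an algorithm: compute which states a transition model can actually reach from the current state, and call those “possible”. The label is not ontologically fundamental; it is a cached summary of a regularity in the one real world that a planner can exploit.

```python
from collections import deque


def reachable_states(start, transitions):
    """Breadth-first search: label every state reachable from `start`.

    `transitions` maps a state to the states that one step can lead to.
    The 'possible' states are just the output of this computation.
    """
    seen = {start}
    frontier = deque([start])
    while frontier:
        state = frontier.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen


# Toy transition model: from state "A", only A, B, C, and D are "possible".
transitions = {"A": ["B", "C"], "B": ["D"], "E": ["F"]}
print(reachable_states("A", transitions))  # {'A', 'B', 'C', 'D'}
```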

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But mainly I look at philosophy when I find it desirable to explain things to philosophers. The work that has been done—the products of these decades of modern debate—is, by and large, just not commensurate with the kind of analysis AI needs. I feel a bit awful about saying this, because it feels like I’m telling philosophers that their life’s work has been a waste of time—not that professional philosophers would be likely to regard me as an authority on whose life has been a waste of time. But if there’s any centralized repository of reductionist-grade naturalistic cognitive philosophy, I’ve never heard mention of it.

And: Philosophy is just not oriented to the outlook of someone who needs to resolve the issue, implement the corresponding solution, and then find out—possibly fatally—whether they got it right or wrong. Philosophy doesn’t resolve things; it compiles positions and arguments. And if the debate about zombies is still considered open, then I’m sorry, but as Jeffreyssai says: Too slow! It would be one matter if I could just look up the standard answer and find that, lo and behold, it is correct. But philosophy, which hasn’t come to conclusions and moved on from cognitive reductions that I regard as relatively simple, doesn’t seem very likely to build complex correct structures of conclusions.

Sorry—but philosophy, even the better grade of modern analytic philosophy, doesn’t seem to end up commensurate with what I need, except by accident or by extraordinary competence. Parfit comes to mind; and I haven’t read much Dennett, but Dennett does seem to be trying to do the same sort of thing that I try to do; and of course there’s Gary Drescher. If there were a repository of philosophical work along those lines—not concerned with defending basic ideas like anti-zombieism, but with accepting those basic ideas and moving on to challenge more difficult quests of naturalism and cognitive reductionism—then that, I might well be interested in reading. But I don’t know who, besides a few heroes, would be able to compile such a repository—who else would see a modal logic as an obvious bounce-off-the-mystery.