whenever it’s been possible to make definite progress on ancient philosophical problems, such progress has almost always involved a [kind of] “bait-and-switch.” In other words: one replaces an unanswerable philosophical riddle Q by a “merely” scientific or mathematical question Q′, which captures part of what people have wanted to know when they’ve asked Q. Then, with luck, one solves Q′.
Of course, even if Q′ is solved, centuries later philosophers might still be debating the exact relation between Q and Q′! And further exploration might lead to other scientific or mathematical questions — Q′′, Q′′′, and so on — which capture aspects of Q that Q′ left untouched. But from my perspective, this process of “breaking off” answerable parts of unanswerable riddles, then trying to answer those parts, is the closest thing to philosophical progress that there is.
Successful examples of this breaking-off process fill intellectual history. The use of calculus to treat infinite series, the link between mental activity and nerve impulses, natural selection, set theory and first-order logic, special relativity, Gödel’s theorem, game theory, information theory, computability and complexity theory, the Bell inequality, the theory of common knowledge, Bayesian causal networks — each of these advances addressed questions that could rightly have been called “philosophical” before the advance was made. And after each advance, there was still plenty for philosophers to debate about truth and provability and infinity, space and time and causality, probability and information and life and mind. But crucially, it seems to me that the technical advances transformed the philosophical discussion as philosophical discussion itself rarely transforms it! And therefore, if such advances don’t count as “philosophical progress,” then it’s not clear that anything should.
Appropriately for this essay, perhaps the best precedent for my bait-and-switch is the Turing Test… with legendary abruptness, Turing simply replaced the original question by a different one: “Are there imaginable digital computers which would do well in the imitation game?”...
...The claim is not that the new question, about the imitation game, is identical to the original question about machine intelligence. The claim, rather, is that the new question is a worthy candidate for what we should have asked or meant to have asked, if our goal was to learn something new rather than endlessly debating definitions. [Luke adds: I’m reminded of Dennett’s quip that “Philosophy… is what you have to do until you figure out what questions you should have been asking in the first place.”] In math and science, the process of revising one’s original question is often the core of a research project, with the actual answering of the revised question being the relatively easy part!
A good replacement question Q′ should satisfy two properties:
(a) Q′ should capture some aspect of the original question Q — so that an answer to Q′ would be hard to ignore in any subsequent discussion of Q.
(b) Q′ should be precise enough that one can see what it would mean to make progress on Q′: what experiments one would need to do, what theorems one would need to prove, etc.
The Turing Test, I think, captured people’s imaginations precisely because it succeeded so well at (a) and (b). Let me put it this way: if a digital computer were built that aced the imitation game, then it’s hard to see what more science could possibly say in support of machine intelligence being possible. Conversely, if digital computers were proved unable to win the imitation game, then it’s hard to see what more science could say in support of machine intelligence not being possible. Either way, though, we’re no longer “slashing air,” trying to pin down the true meanings of words like “machine” and “think”: we’ve hit the relatively-solid ground of a science and engineering problem. Now if we want to go further we need to dig (that is, do research in cognitive science, machine learning, etc). This digging might take centuries of backbreaking work; we have no idea if we’ll ever reach the bottom. But at least it’s something humans know how to do and have done before. Just as important, diggers (unlike air-slashers) tend to uncover countless treasures besides the ones they were looking for.
Yes, this is what modern causal inference did (I suppose by taking Hume’s counterfactual definition of causation, and various people’s efforts to deal with confounding/incompatibility in data analysis, as its starting points).
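To make that causal-inference example concrete, here is a minimal simulation (my own illustration, not from the text) of the bait-and-switch at work: instead of asking the unanswerable "what *is* causation?", we ask the precise Q′ "what is E[Y | do(X=1)] − E[Y | do(X=0)]?", which in this toy setup can be answered by adjusting for a confounder Z via Pearl's back-door formula. All variable names and numbers are made up for the sketch.

```python
# Toy confounding example: Z drives both treatment X and outcome Y,
# while X has ZERO true effect on Y. The naive correlational estimate
# is spuriously large; stratifying on Z (back-door adjustment)
# recovers the true causal effect of ~0.
import random
from statistics import mean

random.seed(0)

def simulate(n=20000):
    """Generate (z, x, y) triples where Z confounds X and Y."""
    data = []
    for _ in range(n):
        z = random.random() < 0.5                   # confounder
        x = random.random() < (0.8 if z else 0.2)   # treatment, driven by Z
        y = 2.0 * z + random.gauss(0.0, 0.5)        # outcome, driven by Z only
        data.append((z, x, y))
    return data

data = simulate()

# Naive (correlational) answer: compare mean Y across treatment groups.
naive = (mean(y for z, x, y in data if x)
         - mean(y for z, x, y in data if not x))

# Back-door adjustment: compare within each stratum of Z,
# then average the stratum effects weighted by P(Z).
def stratum_effect(zval):
    treated   = [y for z, x, y in data if z == zval and x]
    untreated = [y for z, x, y in data if z == zval and not x]
    return mean(treated) - mean(untreated)

p_z1 = mean(z for z, x, y in data)  # booleans average to P(Z=1)
adjusted = p_z1 * stratum_effect(True) + (1 - p_z1) * stratum_effect(False)

print(f"naive estimate:    {naive:+.2f}")     # spuriously far from zero
print(f"adjusted estimate: {adjusted:+.2f}")  # close to the true effect, 0
```

The point of the sketch is exactly the Q → Q′ move: "does X cause Y?" is replaced by a computable estimand, and the answer to the replaced question is hard to ignore in any subsequent discussion of the original one.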