Some limitations of reductionism about epistemology


This post is largely based on a lightning talk I gave at a Genesis event on metacognition, with some editing to clarify and expand on the arguments.

Reductionism is the strategy of breaking things down into smaller pieces, then trying to understand those smaller pieces and how they fit together into larger pieces. It’s been an excellent strategy for physics, for most of science, for most of human knowledge. But my claim is that, when the thing we’re trying to understand is how to think, being overly reductionist has often led people astray, particularly in academic epistemology.

I’ll give three examples of what goes wrong when you try to be reductionist about epistemology. Firstly, we often think of knowledge in terms of sentences or propositions with definite truth-values—for example, “my car is parked on the street outside”. Philosophers have debated extensively what it means to know that such a claim is true; I think the best answer is the bayesian one, where we assign credences to propositions based on our evidence. Let’s say I have 90% credence that my car is parked on the street outside, based on leaving it there earlier—and let’s assume it is in fact still there. Then whether we count this as “knowledge” or not is mainly a question about what threshold we should use for the definition of “knows” (one which will probably change significantly depending on the context).
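To make the threshold idea concrete, here’s a minimal sketch (the function, credences, and thresholds are all invented for illustration; this isn’t a standard formalism): whether a 90% credence counts as “knowledge” just depends on the bar the context sets.

```python
# Toy sketch: "knowing" as a context-dependent threshold on credence.
# All values and the knows() function are invented for illustration.

def knows(credence: float, actually_true: bool, threshold: float) -> bool:
    """Count a belief as knowledge if it's true and the credence clears the contextual threshold."""
    return actually_true and credence >= threshold

credence_car_outside = 0.90  # my credence that the car is still parked on the street

print(knows(credence_car_outside, actually_true=True, threshold=0.80))  # True: casual context
print(knows(credence_car_outside, actually_true=True, threshold=0.99))  # False: high-stakes context
```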

But although bayesianism makes the notion of knowledge less binary, it still relies too much on a binary notion of truth and falsehood. To elaborate, let’s focus on philosophy of science for a bit. Could someone give me a probability estimate that Darwin’s theory of evolution is true? [Audience answer: 97%] Okay, but what if I told you that Darwin didn’t know anything about genetics, or the actual mechanisms by which traits are passed down? So I think that 97% points in the right direction, but it’s less that the theory has a 97% chance of being totally true, and more that it has something like a 97% chance of being 97% true. If you break down everything Darwin said into a list of propositions (animals inherit traits from their parents, and a hundred other claims), then almost certainly at least one of them is false. That doesn’t change the fact that, overall, the theory is very close to true (even though we really have no idea how to measure or quantify that closeness).
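To make the “97% chance of being 97% true” idea concrete, here’s a toy calculation (the numbers and the independence assumption are mine, purely for illustration): if a theory decomposes into 100 sub-claims, each independently 97% likely to be true, then the theory almost certainly contains at least one false sub-claim, even though the expected fraction of true sub-claims is still 97%.

```python
# Toy model: a theory decomposed into 100 sub-claims, each independently 97% likely to be true.
# The numbers and the independence assumption are illustrative only.

n_claims = 100
p_true = 0.97

p_all_true = p_true ** n_claims   # probability the theory is entirely true (~0.048)
p_some_false = 1 - p_all_true     # probability at least one sub-claim is false (~0.952)

print(f"P(every sub-claim true) = {p_all_true:.3f}")
print(f"P(at least one false)   = {p_some_false:.3f}")
print(f"Expected fraction true  = {p_true:.2f}")
```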

I don’t think this is a particularly controversial or novel claim. But it’s surprising that standard accounts of bayesianism don’t even try to account for approximate truth. And I think that’s because people have often been very reductionist in trying to understand knowledge, looking at the simplest individual cases: single propositions with few ambiguities or edge cases. By contrast, when you start looking into philosophy of science, and how theories like Newtonian gravity can be very powerful and accurate approximations to an underlying truth that looks very different, the notion of binary truth and falsehood becomes much less relevant.

Second example: Hume’s problem of induction. Say you’re playing billiards, and you hit a ball towards another ball. You expect them to bounce off each other. But how do you know that they won’t pass straight through each other, or both shoot through the roof? The standard answer: we’ve seen this happen many times before, and we expect that things will stay roughly the same. But Hume points out that this falls short of a deductive argument; it’s just an extrapolation. Since then, philosophers have debated the problem extensively. But they’ve done so in a reductionist way which focuses on the wrong things. The question of whether an individual ball will bounce off another ball is actually a question about our whole system of knowledge: I believe the balls will bounce off each other because I believe they’re made out of atoms, and I have some beliefs about how atoms repel each other. I believe the balls won’t shoot through the roof due to my beliefs about gravity. If you try to imagine the balls not bouncing off each other, you have to imagine a whole revolution in our scientific understanding.

Now, Hume could raise the same objection in response: why can’t we imagine that physics has a special exception in this one case, or maybe that the fundamental constants fluctuate over time? If you push the skepticism that far, I don’t think we have any bulletproof response to it, but that’s true for basically all types of skepticism. Nevertheless, thinking about induction in relation to models of the wider world, rather than individual regularities, is a significant step forward. For example, it clears up Nelson Goodman’s confusion about his New Riddle of Induction. Broadly speaking, the New Riddle asks: why shouldn’t we do induction on weird “gerrymandered” concepts instead of our standard ones? (Goodman’s example is “grue”, which applies to things that are green if observed before some future time and blue otherwise.) For any individual concept, that’s hard to answer. But when you start to think in a more systematic way, it becomes clearer that trying to build a model of the world out of gerrymandered concepts is hugely complex.
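To illustrate where that extra complexity comes from, here’s a rough sketch (the encoding and the cutoff year are my own, not Goodman’s): “grue”, expressed in terms of ordinary concepts, has to smuggle in an arbitrary reference time, whereas “green” doesn’t.

```python
# Toy sketch (my own encoding, not Goodman's): a gerrymandered concept like "grue"
# carries an arbitrary reference time that an ordinary concept like "green" doesn't need.

T0 = 2030  # arbitrary cutoff year baked into the gerrymandered concept

def is_green(colour: str) -> bool:
    return colour == "green"

def is_grue(colour: str, year_observed: int) -> bool:
    # "Grue": green if observed before T0, blue otherwise.
    return colour == "green" if year_observed < T0 else colour == "blue"

# Induction on "green" extrapolates one stable property; induction on "grue"
# quietly predicts that emeralds change colour at an arbitrary date.
print(is_green("green"))       # True
print(is_grue("green", 2024))  # True
print(is_grue("green", 2031))  # False: after T0, "grue" demands blue
```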

Third example: in the history of AI, one of the big problems that people have faced is the problem of symbol grounding: what does it mean for a representation inside my AI to correspond to something in the real world? What does it mean for an AI to have a concept of a car? What makes an internal variable in my AI map onto cars in the real world? Another example comes from neuroscience: you may have heard of Jennifer Aniston neurons, which fire when you recognise a single specific person, across a range of modalities. How does this symbolic representation in your brain relate to the real world?

The history of AI is the history of people trying to solve this from the ground up: start with a few concepts, add some more, branch out, search through them, and so on. This research program, known as symbolic AI, failed pretty badly. And we can see why when we think more holistically. The reason that a neuron in my brain represents my grandmother has nothing to do with that neuron itself; it’s that the neuron is connected to my arms, which make me reach out and hug her when I see her; to the speech centers in my brain, which remind me of her name when I talk about her; and to the rest of my brain, which brings up memories when I think of her. This isn’t something you can figure out by looking at the individual neuron, nor is it something you can design into the system step by step, as AI researchers used to try to do.
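As a very rough sketch of the holistic point (the data structure and the connection labels are invented for illustration, not a claim about how brains or AIs actually work): the “grandmother” node means nothing on its own; whatever meaning it has comes from its links to action, speech, and memory.

```python
# Toy sketch (structure invented for illustration): a symbol's "meaning" isn't a property
# of the node itself, but of how it's wired into the rest of the system.

from dataclasses import dataclass, field

@dataclass
class Symbol:
    name: str
    connections: dict = field(default_factory=dict)

# A bare symbol: nothing about it grounds it in grandmothers out in the world.
bare = Symbol("node_4217")

# The same node, embedded in a wider system: its role comes from what it triggers elsewhere.
grandmother = Symbol("node_4217", connections={
    "action": "reach out and hug her when I see her",
    "speech": "retrieve her name when I talk about her",
    "memory": "bring up episodes involving her when I think of her",
})

print(bare.connections)         # {} : nothing to ground it
print(grandmother.connections)  # the "meaning" lives in these links, not in the node
```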

So these are three cases where, I claim, people have been reductionist about epistemology when they should instead have taken a much more systems-focused approach.