Philosophy of Numbers (part 2)
As it turns out, I asked my leading questions in precisely the reverse order I’d like to answer them in. I’ll start with a simple picture of how we evaluate the truth of mathematical statements, then argue that this picture fits how we understand “truth,” and only last touch on existence.
Back to the comparison between “There exists a city larger than Paris” and “There exists a number greater than 17.” When we evaluate the statement about Paris we check our map of the world, find that Paris doesn’t seem extremely big, and maybe think of some larger cities.
We can use exactly the same thought process on the statement about 17: check our map, quickly recognize that 17 isn’t very big, and maybe think of some bigger numbers or the stored principle that there is no largest integer. A large chunk of our issue now collapses into the question “Why does the map containing 17 seem so similar to the map containing Paris?”
We use the metaphor of map and territory a lot, but let’s take a moment to delve a little deeper. My “map” is really more like a huge collection of names, images, memories, scents, impressions, etcetera, all associated with each other in a big web. When I see the word “Paris” I can very quickly figure out how strongly that thing is associated with “city size,” and by thinking about “city size” I can tell you some city names that seem more closely-associated with that than “Paris.”
“17” is a little trickier, because to explain how I can have associations with “17” in my big web of association, I also need to explain why I don’t need a planet-sized brain to hold my impressions of all possible numbers you could have shown me.
The answer is that there’s not really a separate token in my head for “17,” and there isn’t one for “Paris” either. My brain doesn’t keep a discrete label for everything; instead it stores and manipulates mental representations that are the collective pattern of lots of neurons, and that therefore inhabit some high-dimensional space. For example, 17 and 18 might have mental representations that are close together in representation-space. And I can easily represent 87438 despite never having thought about that number before, because I can map the symbols to the right point in representation-space.
If we really do evaluate mathematical statements the same way we evaluate statements about our map of the external world, then that would explain why both evaluations seem to return the same type of “true” or “false.” It’s also convenient for evaluating the truth of mixed mathematical and empirical statements like “The number of pens on my table is less than 3 factorial.” But we still need to fit this apparent-truth of mathematical statements with our conception of truth as a correspondence between map and territory.
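To make the mixed case concrete, here is a minimal sketch (my construction, with a hypothetical pen count) of how an empirical quantity and a mathematical one end up being compared by exactly the same machinery:

```python
import math

# Empirical input: suppose we counted the pens on the table (hypothetical value).
pens_on_table = 4

# Mathematical input: 3 factorial, derivable without looking at the world.
limit = math.factorial(3)  # 6

# The mixed statement evaluates to an ordinary boolean, just as a purely
# empirical comparison would.
statement = pens_on_table < limit
print(statement)  # True for this pen count
```

The comparison operator neither knows nor cares that one side came from observation and the other from derivation.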
An important fact about our models of the world is that they’re capable of modeling things that aren’t real. Suppose our world contains a red ball. We might hypothesize many different world-models and variations on models, each with a different past and future trajectory for the red ball. Psychologically, this feels like we are imagining different possible worlds, at most one of which can be real.
To make a statement like “The ball is in the box” is to imply that we are in one particular subset of the possible worlds. This statement is false in some possible worlds and true in others, but we should only endorse it if, in our one true world, the ball is actually in the box.
Each statement about the red ball that we can evaluate as true or false can be thought of as defining the set of possible worlds where that statement is true. “The volume of the ball contains a neutrino” is true in almost every world, while “The ball is in a volcano” is true in almost none. Knowing true statements helps us narrow down which possible world we’re actually in.
<Digression> More technically, knowing true statements helps us pick models that predict the world well. All this talk of possible worlds is a convenient metaphor. </Digression>
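A toy version of this picture (my construction, not anything from the formal literature) treats possible worlds as dicts and statements as predicates over worlds; learning a true statement just filters the candidate set:

```python
# Four hypothetical possible worlds, differing in the red ball's history.
worlds = [
    {"bounces": 2, "in_box": True},
    {"bounces": 3, "in_box": True},
    {"bounces": 3, "in_box": False},
    {"bounces": 4, "in_box": False},
]

def ball_in_box(w):
    return w["in_box"]

def bounced_at_least_three(w):
    return w["bounces"] >= 3

# Each statement defines the set of worlds where it holds.
in_box_worlds = [w for w in worlds if ball_in_box(w)]
print(len(in_box_worlds))  # 2

# Learning another true statement narrows the candidates further.
candidates = [w for w in in_box_worlds if bounced_at_least_three(w)]
print(len(candidates))  # 1
```

Truth-as-correspondence, in this sketch, is just the claim that the actual world is inside the set a statement picks out.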
Moving closer to the point: “The ball has bounced a prime number of times” also defines a perfectly valid set of possible worlds. So. Does “3 is a prime number” define a set of possible worlds?
If we were really committed to answering “no” to this, we would have to undergo strange contortions, like being able to evaluate “The ball has bounced three times and the ball has bounced a prime number of times,” but not “The ball has bounced three times and three is a prime number.” Being able to compare the empirical with the abstract suggests the ability to compare the abstract with the abstract.
If we answer “yes,” the set of possible worlds where 3 is a prime number seems like “all of them.” (Or perhaps only almost all of them.) Math is then a bunch of tautologies.
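In the toy possible-worlds sketch from above (again my construction), a purely mathematical statement simply ignores its world argument, which is exactly what makes it hold in all worlds or in none, and what makes the two conjunctions from the contortion example agree everywhere:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

# Hypothetical possible worlds: the ball bounced 1 to 5 times.
worlds = [{"bounces": b} for b in range(1, 6)]

def bounced_three(w):
    return w["bounces"] == 3

def bounced_prime(w):
    return is_prime(w["bounces"])

def three_is_prime(w):
    return is_prime(3)  # never looks at w: true in every world

# "Bounced three times and bounced a prime number of times" vs.
# "bounced three times and three is prime" agree in every world.
assert all(
    (bounced_three(w) and bounced_prime(w))
    == (bounced_three(w) and three_is_prime(w))
    for w in worlds
)

# The mathematical statement is a tautology over this world-set:
print(sum(three_is_prime(w) for w in worlds))  # 5, i.e. all worlds
```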
But this raises an important problem: if mathematical truths are tautologous, then a mental map of mathematics would seem unnecessary—you could just evaluate statements by checking whether they follow from the axioms. Worse, if mathematical statements are true in every possible world or false in every possible world, then learning them doesn’t narrow down which world we’re in, so they don’t refine our predictions of the world. To resolve this apparent problem, we’ll need a very powerful force: human ignorance.
Even though mathematical statements are theoretically evaluable from a small set of axioms, in practice that is much, much too hard for humans to do at runtime. Instead, we have to build up our knowledge of math slowly, associate important results with each other and with their real-world applications, and be able to place new knowledge in context of the old.
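A loose computational analogy (my construction): re-deriving every result from first principles each time is exponentially costly, and caching intermediate results is what makes the computation feasible, much like keeping a mental map of already-established mathematics.

```python
from functools import lru_cache

def fib_from_scratch(n):
    # "Reasoning from the axioms" every time: exponential work.
    return n if n < 2 else fib_from_scratch(n - 1) + fib_from_scratch(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    # "Consulting the map": each result is derived once, then reused.
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

print(fib_cached(30))  # 832040, computed in linear time
```

Both functions agree on every input; the cached one is just the only version a bounded reasoner can afford to run at scale.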
So it is precisely human badness at math that makes us keep a mental map of mathematics that’s structured like our map of the world. The fact that our map doesn’t start completely filled in also means that we can learn new things about math. It also leads directly into my last leading question from part one: why might we think numbers exist?
The reasons to feel like numbers exist are pretty similar to the reasons to feel like the physical world exists. For starters, our observations don’t always turn out how we’d predict. The stuff that generates the predictions, we call belief, and the stuff that generates the observations, we call reality.
Sometimes, you have beliefs about mathematical statements even if you can’t prove them. You might think, say, P!=NP, not by reasoning from the axioms, but by reasoning from the shape of your map. And when this heuristic reasoning fails, as it occasionally does, it feels like you’re encountering an external reality, even if there’s no causal thing that could be providing the feedback.
We also feel more like things exist when we model them as objective, rather than subjective. When we use our model of the world to imagine changing people’s opinions about an objective thing, our model says that the objective thing doesn’t change. Mathematical truths fulfil this property nicely—details left to the reader.
Lastly, things that we think exist have relationships with other elements in our map of the world. Things are associated with properties, like color and size—numbers definitely have properties. And although numbers are not connected to rocks in a causal model of the world, it seems like we say “2+2=4” because 2+2=4. But the “because” back there is not a causal relationship—rather it’s an association our brain makes that’s something like logical implication.
So maybe I do understand those mysterious links in LDT (artist’s representation above) better than I did before. They’re a toy-model representation of a connection that seems very natural in our brains, between different things that we have in the same map of the world.
I played a bit coy in this post—I talk a big game about understanding numbers, but here we are at the end and rather than telling you whether numbers really exist or not, I’ve just harped on what makes people feel like things exist.
To give away the game completely: I avoided the question because whether numbers “really exist” can end up getting stuck in the center node of the classic blegg/rube classifier. When faced with a red egg, the solution is usually not to figure out if it’s “really a blegg or a rube.” The solution is to be able to think about it as a red egg. And the even better solution is to understand the function of sorting these objects so that we can use categorizations in contexts where it’s useful.
Understanding why we feel the way we do about numbers is really an exercise in looking at the surrounding nodes. The core claim of this article is that two things that normally agree—“should be a basic object in a parsimonious causal model of the world” and “can usefully be thought about using certain expectations and habits developed for physical objects”—diverge here, and so we should strive to replace tension about whether numbers “really exist” with understanding of how we think about numbers.
My aim was for a standard LW-ian view of numbers. I feel like I learned a lot writing this, and hopefully some of that feeling rubs off on the reader. (Thank you for reading, by the way.) I’ll be back with something completely different next week.