Philosophy of Numbers (part 2)

A post in a series of things I think would be fun to discuss on LW. Part one is here.


I

As it turns out, I asked my leading questions in precisely the reverse order I'd like to answer them in. I'll start with a simple picture of how we evaluate the truth of mathematical statements, then defend the claim that this makes sense in terms of how we understand "truth," and only at the end mention existence.

Back to the comparison between "There exists a city larger than Paris" and "There exists a number greater than 17." When we evaluate the statement about Paris we check our map of the world, find that Paris doesn't seem extremely big, and maybe think of some larger cities.

We can use exactly the same thought process on the statement about 17: check our map, quickly recognize that 17 isn't very big, and maybe think of some bigger numbers or the stored principle that there is no largest integer. A large chunk of our issue now collapses into the question "Why does the map containing 17 seem so similar to the map containing Paris?"

<Digression>

We use the metaphor of map and territory a lot, but let's take a moment to delve a little deeper. My "map" is really more like a huge collection of names, images, memories, scents, impressions, etcetera, all associated with each other in a big web. When I see the word "Paris" I can very quickly figure out how strongly that thing is associated with "city size," and by thinking about "city size" I can tell you some city names that seem more closely associated with it than "Paris."
"17" is a little trickier, because to explain how I can have associations with "17" in my big web of associations, I also need to explain why I don't need a planet-sized brain to hold my impressions of all possible numbers you could have shown me.
The answer is that there's not really a separate token in my head for "17," and not for "Paris" either. My brain doesn't keep a discrete label for everything; instead it stores and manipulates mental representations that are the collective pattern of lots of neurons, and that therefore inhabit some high-dimensional space. For example, 17 and 18 might have mental representations that are close together in representation-space. And I can easily represent 87438 despite never having thought about that number before, because I can map the symbols to the right point in representation-space.
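A minimal sketch, in Python, of the kind of thing I mean (the features here are invented purely for illustration; nobody knows what the brain's actual representation looks like):

```python
import math

def toy_representation(n: int) -> tuple[float, float]:
    """Map a numeral to a point in a small, made-up 'representation space'."""
    return (
        math.log10(n),   # rough magnitude: 17 and 18 get nearly the same value
        (n % 10) / 10,   # last digit, squashed into [0, 1)
    )

# 17 sits much closer to 18 than to 87438 in this space, and 87438 gets a
# perfectly good point even though nothing was ever stored for it.
print(math.dist(toy_representation(17), toy_representation(18)))
print(math.dist(toy_representation(17), toy_representation(87438)))
```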

</Digression>

If we really do evaluate mathematical statements the same way we evaluate statements about our map of the external world, then that would explain why both evaluations seem to return the same type of "true" or "false." It's also convenient for evaluating the truth of mixed mathematical and empirical statements like "The number of pens on my table is less than 3 factorial." But we still need to fit this apparent truth of mathematical statements with our conception of truth as a correspondence between map and territory.

II

An important fact about our models of the world is that they're capable of modeling things that aren't real. Suppose our world contains a red ball. We might hypothesize many different world-models and variations on models, each with a different past and future trajectory for the red ball. Psychologically, this feels like we are imagining different possible worlds, at most one of which can be real.

To make a statement like "The ball is in the box" is to imply that we are in one specific fraction of the possible worlds. This statement is false in some possible worlds and true in others, but we should only endorse it if, in our one true world, the ball is actually in the box.

Each statement about the red ball that we can evaluate as true or false can be thought of as defining a set of the possible worlds where that statement is true. "The volume of the ball contains a neutrino" is true in almost every world, while "The ball is in a volcano" is true in almost none. Knowing true statements helps us narrow down which possible world we're actually in.

<Digression> More technically, knowing true statements helps us pick models that predict the world well. All this talk of possible worlds is a convenient metaphor. </Digression>

Moving closer to the point: "The ball has bounced a prime number of times" also defines a perfectly valid set of possible worlds. So. Does "3 is a prime number" define a set of possible worlds?

If we were really committed to answering "no" to this, we would have to undergo strange contortions, like being able to evaluate "The ball has bounced three times and the ball has bounced a prime number of times," but not "The ball has bounced three times and three is a prime number." Being able to compare the empirical with the abstract suggests the ability to compare the abstract with the abstract.

If we answer "yes," the set of possible worlds where 3 is a prime number seems like "all of them." (Or perhaps only almost all of them.) Math is then a bunch of tautologies.
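To make the contrast concrete, here's a toy sketch in Python. The miniature "worlds" are invented for illustration and differ only in how many times the ball has bounced; the point is that the empirical statement picks out some worlds and excludes others, while "3 is a prime number" never consults the world at all:

```python
# Toy possible worlds: the only fact that varies is the ball's bounce count.
worlds = [{"bounces": n} for n in range(10)]

def is_prime(n: int) -> bool:
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# An empirical statement carves out a proper subset of the worlds...
bounced_prime = [w for w in worlds if is_prime(w["bounces"])]

# ...while "3 is a prime number" ignores the world entirely, so it
# holds in all of them.
three_is_prime = [w for w in worlds if is_prime(3)]

print(len(bounced_prime), "of", len(worlds))   # 4 of 10
print(len(three_is_prime), "of", len(worlds))  # 10 of 10
```

Knowing the first statement narrows down which world you're in; knowing the second narrows down nothing, which is exactly the "not useful" worry the next paragraph takes up.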

But this raises an important problem: if mathematical truths are tautologous, that would seem to render a mental map of mathematics unnecessary, since you could just evaluate statements by checking whether they follow from the axioms. Worse, if mathematical statements are always true or always false, then they're not useful, because learning them doesn't refine our predictions of the world. To resolve this apparent problem, we'll need a very powerful force: human ignorance.

Even though mathematical statements are theoretically evaluable from a small set of axioms, in practice that is much, much too hard for humans to do at runtime. Instead, we have to build up our knowledge of math slowly, associate important results with each other and with their real-world applications, and be able to place new knowledge in the context of the old.

So it is precisely human badness at math that makes us keep a mental map of mathematics that's structured like our map of the world. The fact that our map doesn't start completely filled in also means that we can learn new things about math. It also leads directly into my last leading question from part one: why might we think numbers exist?

III

The reasons to feel like numbers exist are pretty similar to the reasons to feel like the physical world exists. For starters, our observations don't always turn out how we'd predict. The stuff that generates the predictions, we call belief, and the stuff that generates the observations, we call reality.

Sometimes, you have beliefs about mathematical statements even if you can't prove them. You might think, say, P!=NP, not by reasoning from the axioms, but by reasoning from the shape of your map. And when this heuristic reasoning fails, as it occasionally does, it feels like you're encountering an external reality, even if there's no causal thing that could be providing the feedback.

We also feel more like things exist when we model them as objective, rather than subjective. When we use our model of the world to imagine changing people's opinions about an objective thing, our model says that the objective thing doesn't change. Mathematical truths fulfill this property nicely; details left to the reader.

Lastly, things that we think exist have relationships with other elements in our map of the world. Things are associated with properties, like color and size, and numbers definitely have properties. And although numbers are not connected to rocks in a causal model of the world, it seems like we say "2+2=4" because 2+2=4. But the "because" back there is not a causal relationship; rather, it's an association our brain makes that's something like logical implication.

So maybe I do understand those mysterious links in LDT (artist's representation above) better than I did before. They're a toy-model representation of a connection that seems very natural in our brains, between different things that we have in the same map of the world.


Epilogue

I played a bit coy in this post: I talk a big game about understanding numbers, but here we are at the end and, rather than telling you whether numbers really exist or not, I've just harped on what makes people feel like things exist.

To give away the game completely: I avoided the question because whether numbers "really exist" can end up getting stuck in the center node of the classic blegg/rube classifier. When faced with a red egg, the solution is usually not to figure out whether it's "really a blegg or a rube." The solution is to be able to think about it as a red egg. And the even better solution is to understand the function of sorting these objects, so that we can use categorizations in contexts where they're useful.

Understanding why we feel the way we do about numbers is really an exercise in looking at the surrounding nodes. The core claim of this article is that two things that normally agree ("should be a basic object in a parsimonious causal model of the world" and "can usefully be thought about using certain expectations and habits developed for physical objects") diverge here, and so we should strive to replace tension about whether numbers "really exist" with an understanding of how we think about numbers.


My aim was for a standard LW-ian view of numbers. I feel like I learned a lot writing this, and hopefully some of that feeling rubs off on the reader. (Thank you for reading, by the way.) I'll be back with something completely different next week.