# Philosophy of Numbers (part 2)

A post in a series of things I think would be fun to discuss on LW. Part one is here.

I

As it turns out, I asked my leading questions in precisely the reverse order I'd like to answer them in. I'll start with a simple picture of how we evaluate the truth of mathematical statements, then defend that this makes sense in terms of how we understand "truth," and only last mention existence.

Back to the comparison between "There exists a city larger than Paris" and "There exists a number greater than 17." When we evaluate the statement about Paris we check our map of the world, find that Paris doesn't seem extremely big, and maybe think of some larger cities.

We can use exactly the same thought process on the statement about 17: check our map, quickly recognize that 17 isn't very big, and maybe think of some bigger numbers or the stored principle that there is no largest integer. A large chunk of our issue now collapses into the question "Why does the map containing 17 seem so similar to the map containing Paris?"

<Digression>

We use the metaphor of map and territory a lot, but let's take a moment to delve a little deeper. My "map" is really more like a huge collection of names, images, memories, scents, impressions, etcetera, all associated with each other in a big web. When I see the word "Paris" I can very quickly figure out how strongly that thing is associated with "city size," and by thinking about "city size" I can tell you some city names that seem more closely associated with it than "Paris."

"17" is a little trickier, because to explain how I can have associations with "17" in my big web of associations, I also need to explain why I don't need a planet-sized brain to hold my impressions of all possible numbers you could have shown me.

The answer is that there's not really a separate token in my head for "17," and not for "Paris" either. My brain doesn't keep a discrete label for everything; instead it stores and manipulates mental representations that are the collective pattern of lots of neurons, and that therefore inhabit some high-dimensional space. For example, 17 and 18 might have mental representations that are close together in representation-space. And I can easily represent 87438 despite never having thought about that number before, because I can map the symbols to the right point in representation-space.

</Digression>

If we really do evaluate mathematical statements the same way we evaluate statements about our map of the external world, then that would explain why both evaluations seem to return the same type of "true" or "false." It's also convenient for evaluating the truth of mixed mathematical and empirical statements like "The number of pens on my table is less than 3 factorial." But we still need to fit this apparent truth of mathematical statements with our conception of truth as a correspondence between map and territory.
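As a toy illustration of such a mixed statement, evaluating it just glues an empirical observation onto a mathematical evaluation (the pen count here is a made-up value, not a claim about any actual table):

```python
import math

# Hypothetical empirical observation: how many pens are on the table.
pens_on_table = 4

# Mathematical part of the statement: 3 factorial = 6.
threshold = math.factorial(3)

# The mixed statement "the number of pens on my table is less than 3!"
# is evaluated by combining the two.
print(pens_on_table < threshold)  # True for this hypothetical count
```

The point is only that the empirical check and the mathematical check slot together seamlessly, which is what the paragraph above leans on.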

II

An important fact about our models of the world is that they're capable of modeling things that aren't real. Suppose our world contains a red ball. We might hypothesize many different world-models and variations on models, each with a different past and future trajectory for the red ball. Psychologically, this feels like we are imagining different possible worlds, at most one of which can be real.

To make a statement like "The ball is in the box" is to imply that we are in one specific fraction of the possible worlds. This statement is false in some possible worlds and true in others, but we should only endorse that the ball is in the box if, in our one true world, the ball is actually in the box.

Each statement about the red ball that we can evaluate as true or false can be thought of as defining a set of the possible worlds where that statement is true. "The volume of the ball contains a neutrino" is true in almost every world, while "The ball is in a volcano" is true in almost none. Knowing true statements helps us narrow down which possible world we're actually in.
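The picture above can be sketched in a few lines of purely illustrative Python: model a possible world as a bundle of facts, and a statement as a predicate that picks out the set of worlds where it holds. The three worlds and their facts are invented for the sketch.

```python
def is_prime(n):
    """Trial-division primality check (fine for small n)."""
    return n >= 2 and all(n % d != 0 for d in range(2, n))

# Three toy "possible worlds", each a dictionary of facts.
worlds = [
    {"ball_location": "box", "bounces": 3},
    {"ball_location": "floor", "bounces": 4},
    {"ball_location": "volcano", "bounces": 5},
]

# A statement is a predicate on worlds; it "defines" the set of
# worlds where it comes out true.
def ball_in_box(w):
    return w["ball_location"] == "box"

def bounced_prime(w):
    return is_prime(w["bounces"])

print([w for w in worlds if ball_in_box(w)])    # only the first world
print([w for w in worlds if bounced_prime(w)])  # first and third worlds
```

In this sketch, learning that a statement is true means discarding every world outside its set, which is exactly the narrowing-down described above.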

<Digression> More technically, knowing true statements helps us pick models that predict the world well. All this talk of possible worlds is a convenient metaphor. </Digression>

Moving closer to the point: "The ball has bounced a prime number of times" also defines a perfectly valid set of possible worlds. So. Does "3 is a prime number" define a set of possible worlds?

If we were really committed to answering "no" to this, we would have to undergo strange contortions, like being able to evaluate "The ball has bounced three times and the ball has bounced a prime number of times," but not "The ball has bounced three times and three is a prime number." Being able to compare the empirical with the abstract suggests the ability to compare the abstract with the abstract.

If we answer "yes," the set of possible worlds where 3 is a prime number seems like "all of them." (Or perhaps only almost all of them.) Math is then a bunch of tautologies.

But this raises an important problem: if mathematical truths are tautologous, then having a mental map of mathematics would seem unnecessary, since you could just evaluate statements purely on whether they follow from the axioms. Worse, if mathematical statements are true in every possible world or false in every possible world, then they're not useful, because learning them doesn't refine our predictions of the world. To resolve this apparent problem, we'll need a very powerful force: human ignorance.

Even though mathematical statements are theoretically evaluable from a small set of axioms, in practice that is much, much too hard for humans to do at runtime. Instead, we have to build up our knowledge of math slowly, associate important results with each other and with their real-world applications, and be able to place new knowledge in the context of the old.

So it is precisely human badness at math that makes us keep a mental map of mathematics that's structured like our map of the world. The fact that our map doesn't start completely filled in also means that we can learn new things about math. It also leads directly into my last leading question from part one: why might we think numbers exist?

III

The reasons to feel like numbers exist are pretty similar to the reasons to feel like the physical world exists. For starters, our observations don't always turn out how we'd predict. The stuff that generates the predictions we call belief, and the stuff that generates the observations we call reality.

Sometimes, you have beliefs about mathematical statements even if you can't prove them. You might think, say, P!=NP, not by reasoning from the axioms, but by reasoning from the shape of your map. And when this heuristic reasoning fails, as it occasionally does, it feels like you're encountering an external reality, even if there's no causal thing that could be providing the feedback.

We also feel more like things exist when we model them as objective, rather than subjective. When we use our model of the world to imagine changing people's opinions about an objective thing, our model says that the objective thing doesn't change. Mathematical truths fulfill this property nicely; details left to the reader.

Lastly, things that we think exist have relationships with other elements in our map of the world. Things are associated with properties, like color and size, and numbers definitely have properties. And although numbers are not connected to rocks in a causal model of the world, it seems like we say "2+2=4" because 2+2=4. But the "because" back there is not a causal relationship; rather, it's an association our brain makes that's something like logical implication.

So maybe I do understand those mysterious links in LDT (artist's representation above) better than I did before. They're a toy-model representation of a connection that seems very natural in our brains, between different things that we have in the same map of the world.

Epilogue

I played a bit coy in this post: I talk a big game about understanding numbers, but here we are at the end and, rather than telling you whether numbers really exist or not, I've just harped on what makes people feel like things exist.

To give away the game completely: I avoided the question because whether numbers "really exist" can end up getting stuck in the center node of the classic blegg/rube classifier. When faced with a red egg, the solution is usually not to figure out if it's "really a blegg or a rube." The solution is to be able to think about it as a red egg. And the even better solution is to understand the function of sorting these objects, so that we can use categorizations in contexts where it's useful.

Understanding why we feel the way we do about numbers is really an exercise in looking at the surrounding nodes. The core claim of this article is that two things that normally agree, "should be a basic object in a parsimonious causal model of the world" and "can usefully be thought about using certain expectations and habits developed for physical objects," diverge here, and so we should strive to replace tension about whether numbers "really exist" with an understanding of how we think about numbers.

My aim was for a standard LW-ian view of numbers. I feel like I learned a lot writing this, and hopefully some of that feeling rubs off on the reader. (Thank you for reading, by the way.) I'll be back with something completely different next week.

• I just wanted to nitpick on one point: it's not true that all mathematical statements are theoretically evaluable from a small set of axioms. That's the point of Gödel's theorem. Maybe what you meant to say is that the truth-values of all mathematical statements are determined once you fix the axioms? This is closer to being correct, but still not quite right. The right way to say it is that the truth-value of a mathematical statement is determined once you fix the interpretation of the statement with sufficient precision. The axioms of e.g. Peano arithmetic can be suggestive of a certain interpretation of addition, multiplication, and the class of natural numbers, but in fact the interpretation resides in our minds and not in the axioms.

Of course, your main point still stands: even if the truth-value of a mathematical statement has been determined, that doesn't mean we know what it is.

• Good points. I'm not sure that there is a sense in which the Gödel sentence is true that doesn't rely on human reasoning (or an analogue thereof) filling in the gaps in a very similar way to how we fill in the gaps for P!=NP. Even though P!=NP is probably simple ignorance, while for Gödel we know there are models of the axioms with both truth values. But you're definitely right that saying "you could just evaluate all mathematical sentences" sweeps some important stuff under the rug.

• By the way, you can actually make these into a sequence by going to https://www.lesserwrong.com/library and clicking on the "New Sequence" button next to "Community Sequences" (the sequence creation UI is still somewhat janky, but it should work).

• Thanks! The creation works great. Only issue was dropping the sequence navigation thingie (sequence name and forward and back buttons) upon editing the post, fixed by removing and re-adding the post to the sequence.

• Do unicorns exist? It seems to me that your arguments are fully general. You can, in fact, make true statements about unicorns ("every unicorn has a horn") and perhaps some of them might not even seem trivial. It's just that numbers are more precise, so we can make more claims about them, and more concise, so we can assume that my numbers and your numbers are the same.

• You might note that I made no argument that numbers exist :) The arguments in the bit on existence were all for what factors I think are important in people's feeling that they exist. If you take the arguments and apply them to unicorns, what I'd hope that they explain is not whether or not unicorns exist, but why people might not believe unicorns exist.

• Do you see some difference between saying "numbers exist" and "I think/feel that numbers exist"? I sure don't.

Regarding unicorns, how do your arguments support their non-existence? I'm seeing the opposite. I think with your arguments every idea and concept could be said to exist.

• The difference I see between the statements is that they suggest different courses of inquiry. Suppose I start from the naive view of thinking that numbers exist. If I think of this as "numbers exist," then I'll start asking questions like "where did numbers come from?" and "what's a good necessary and sufficient definition of numbers?" I think these are bad questions to ask and mostly get you nowhere. In fact, the badness of these questions is a great pragmatic argument for saying that numbers don't exist.

But if you think of your belief as "I feel like numbers exist," you might ask things more like "why do I feel like numbers exist?" which I quite like, because, as I am shamelessly copying from Eliezer, this is the sort of question that gets you sensible information whether or not your naive view was correct. And once you understand where your belief comes from, I think you actually end up caring less about whether numbers "exist" or not, because once you know what properties of numbers are important to you, you can let your thoughts dictate the word you choose to use, rather than letting the label dictate your thoughts.

Anyhow, the key thing from this post that doesn't apply to unicorns is that there's no experience of having separate things cause our hypotheses and our updates about unicorns. This might help explain why we think it's obvious that unicorns don't exist.

• If I think of this as "numbers exist," then I'll start asking questions like "where did numbers come from?"

That’s not an ex­pe­rience I can re­late to, but ok.

And once you understand where your belief comes from, I think you actually end up caring less about whether numbers "exist" or not

I see where you're coming from; however, I'm a big believer in the concept that words should mean things. If you find the word "exist" too vague for your purposes, you should propose a more precise definition, or use a different word.

Anyhow, the key thing from this post that doesn't apply to unicorns is that there's no experience of having separate things cause our hypotheses and our updates about unicorns.

I'm saying that there is. For now, instead of unicorns, consider god. There is the entire field of theology focused on reasoning about god, creating hypotheses about it and finding them wrong. But hopefully we don't feel that god exists (or if we do feel it, that's not thanks to theology). Or consider the Star Wars universe. Likewise there are many fans who reason about what belongs to this universe and what does not, and where there is reasoning, there is a chance to find our hypotheses wrong. The same is true for every idea; it's only that unicorns are degenerate: the reasoning is too trivial to find yourself wrong. But if we were morons, perhaps we'd find the hypothesis "unicorns have one horn" to be novel and profound.

• Fair points. I think that this sort of game-playing might contribute to people feeling like god exists, but it's definitely a bad reason. But in that case, perhaps we might say that god-the-concept 'exists' (concepts and numbers are in pretty much the same boat re: existence) but god-the-being-with-causal-effects doesn't exist, and people are trying to smuggle properties from one to the other by using the same name for both.

This is sort of a reverse of the ontological argument.