Thinking as the Crow Flies: Part 1 - Introduction


I’ve wanted to write a series of posts here on logic and the foundations of mathematics for a while now. There’s been some recent discussion about the ontology of numbers and the existence of mathematical entities, so this seems as good a time as any to start.

Many of the discussed philosophical problems, as far as I can tell, stem from the assumption of formalism. That is, many people seem to think that mathematics is, at some level, a formal logic, or at least that the activity of mathematics has to be founded on some formal logic, especially a classical one. Beyond being an untenable position since Gödel’s Incompleteness Theorems, this doesn’t make much intuitive sense, since mathematics was clearly done before the invention of formal logic. By abandoning this assumption, and taking a more constructivist approach, we get a much clearer view of mathematics and logic as a whole.

This first post is mostly informal philosophizing, attempting to describe exactly what logic and mathematics are about. My second post will be a more technical discussion accounting for the basic notions of logic.

Intuitions and Sensations

To begin, I’d like to point out a fact which most would find obvious but which has, in the past, led to difficult philosophical problems. It is clear that we don’t have direct access to the real world. Instead, we have senses which feed information, even if dishonestly, to our mind. These senses may be predictable, potentially modeled by a pattern which mimics our stream of senses. At some level, we have direct access to a sensory signal. This signal is not a pure, unfiltered lens on the world, but it is a signal, independent of, but directly accessible by, us.

We also have access to our intuitions, the part of our thoughts which we may label “ideas”. We may not have total access to all our faculties. Much of our mental processing is done outside the view of our awareness. If I asked you to name a random city, eventually you’d come up with one. You’d, however, be hard-pressed to produce a trace of how that city’s name came to your awareness. Perhaps you could offer background information, explaining why you’d name that city among all the possibilities. Regardless of such accounts, we’d still lack a trace of the signal sent by your consciousness (“I want the name of a city, any city”) reaching into the part of your mind capable of fulfilling such a request, and the subsequent reception of the name into your awareness. We don’t have such detailed access to the inner workings of our mind.

It seems that those things which we have direct access to are, in fact, part of us. Those intuitions within our awareness, those filtered signals which we directly experience, make up our qualia, are instantiated in the substrate of our consciousness. They may be thought of as part of ourselves, and to say we have access to them is to say that we have direct access to those parts of ourselves of which we are aware. This, I think, is trivially true, though it isn’t essential for the rest of this piece.

We may distinguish normal intuitions from senses by the degree to which we can control them. Intuitions are controllable and manipulable by ourselves, while senses are not. This distinction isn’t perfectly clean. One may, for example through small doses of DMT, experience controllable hallucinations which are a manifestation of direct (though not complete) control of the senses. Also, there are plenty of examples of intuitions which we find difficult to control, such as ear-worms. For the sake of this work, I will ignore such cases. What I want to focus on are sensory sensations fed to our awareness passively and those intuitions over which we have complete (or, for practical purposes, complete) control. These are the sorts of things needed for logic, mathematics, and science, which will be the primary focuses of this series. For the remainder, by “sense” and “sensory data” I am referring to those qualia which are experienced passively, without deliberate control; by “intuition” I am referring to those intuitions which are under our direct and (at least apparently) total control.


At this stage, it’s useful to make a remark about language and grounding. Consider what I might be saying if I describe something as an elephant. Within my mind is an intuition to which I’m assigning the word “elephant”, and in calling something presumed external to me an elephant, I am asserting that my intuition is an approximate model for the thing I’m naming. The difference between the intuition and the real thing is important. It is practically impossible to have a perfect understanding of real-world entities. My intuition tied to “elephant” does not contain all that I might consider knowable about elephants, but only those things which I do know. A veterinarian specializing in elephants would certainly have a more accurate, more elaborate intuition assigned to “elephant” than a non-specialist, and even this wouldn’t be the full extent to which elephants could be modeled. In essence, I’m using this modeling intuition as a metaphor for an elephant whenever I use that word.

Based on this, we can account for learning and disagreement. Learning can be characterized as the process of refining an approximately correct intuition modeling something external. A disagreement stems from two main places. Firstly, two people with similar sensations may be using differing models. Because of these differences, two people may describe identical sensations differently, as their models might disagree. Secondly, two people may think they’re getting similar sensations when they are not, and so disagree because they are unable to correctly compare models to begin with. This is the “blind men and an elephant” scenario.

This account also cleanly explains why we can still meaningfully talk about elephants when none are present. In that case, we are speaking of the intuition assigned to “elephant”. Additionally, we can talk about non-existent entities like unicorns unproblematically, as such things would still have realities as intuitions. An assertion of existence or nonexistence is really about an intuition, a model of something. The property of existence corresponds to a prediction of presence in the real world by our model, non-existence to our model predicting absence. The correctness of these properties is precisely the degree to which they accurately predict sensory data.

Intuitions need not be designed to model something in order to be used to model something else. If I try to describe an animal which I’m encountering for the first time, I may construct a new model of it by piecing together parts of older models. I may even call it an “elephant-like-thing” if I feel the comparison has some, if limited, predictive power. In this way, I’m constructing a new model by characterizing the degree to which other models predict properties of the new animal I’m seeing. Eventually, I may assign this new model a word, or borrow a word from someone else.

One can also create intuitions without attempting to model something external. If you were a mind in a void, without any sensory information, you should still be able to think of basic mathematical and logical concepts, such as numbers. You might not be motivated to do so, but the ability to do so is what’s relevant here. These concepts can be understood in totality as intuitions, completely definable without external referents. Later, this will be elaborated on at length, but take this paragraph as-is for the moment.

Even if an intuition was created without intent to model, it can still be used as such. For example, one can think of “2” without using it to model anything. One can still say that a herd of elephants has 2 members, using the intuition of 2 as a metaphor for some aspect of the herd.

A notion I’ve heard before is that a herd with 2 members would seemingly have 2 members even if there was no one around to think so, and so 2 has to exist independently of a mind. Under my account, this statement fails to understand perspective. It is certainly the case that one could model a herd of 2 using 2, regardless of whether anyone else was thinking of the herd. However, even asking about the herd presupposes that at least the asker is thinking about the herd, disproving the premise that the herd isn’t being thought about. If it were truly the case that no one was thinking of it at all, then there’s nothing to talk about. The question would not have been asked in the first place, and the apparent problem then vanishes. It is clear at this point that stating “a herd has 2 members” does not make 2 part of the world; 2 remains part of our model of the world.

At this point, I will introduce terminology which distinguishes between the two kinds of intuitions discussed. Intuitions which are potentially incomplete, designed to model external entities, will be called grounded intuitions. Those intuitions which may be complete and may exist without modeling properties will simply be called ungrounded intuitions.

One common description of reality stemming from Platonism is that of an imperfect shadow or reflection of the transcendental world of ideals. After all, circles are perfect, but nothing in the world described as a circle is a truly perfect circle. By my account, perfection doesn’t come into the picture. A circle is an ungrounded intuition. An external entity is only accurately called a circle in so far as the intuition of a circle accurately models the entity’s physical form. The entity isn’t imperfect in some objective sense. Rather, the grounded intuition of that entity is simply more complex than the ungrounded intuition of the circle. The apparent imperfection of the world is only a manifestation of its complexity. Grounded intuitions tend to be more complicated than the ungrounded intuitions which we use to approximate the real world. This is, at once, not surprising, but significant. If we lived in an extremely simple world (or one which was simple relative to our minds) then we might create grounded intuitions which were simpler than the average ungrounded one. We might then have trouble distinguishing between sensory data and intuition, as all facts about the real world would be completely obvious and intuitively predictable.

Ontological Commitments of Ungrounded Entities

I think it’s worth taking the time to discuss some content related to ontological commitments and conventions. Ontological commitments were introduced by Quine, but I won’t hold true to the notion as he originally described it. Instead, by an ontological commitment, I am referring to an assertion of the objective existence of an entity which is independent of the subjective experience of the person making the assertion.

Let’s take a scenario where two people are arguing over what color the blood of a unicorn is. One says silver, the other red. Our goal is to make sense of this argument. Assuming neither person believes unicorns exist, what content does this argument actually have?

First, it behooves us to make sense of what a unicorn is, and what commitments we make in talking about them. For the moment, I’ll stick to a conventional distributional-semantic characterization of meaning (I plan on making a post about this quite some time from now). Through our experience, we eventually associate words like “blood”, “horse”, and “horn” with vectors inside of some semantic space. We can then combine them in a sensical way to produce the idea of a horse with a horn, a new vector for a new idea, a unicorn. When talking about commitments, we need to make a distinction between two things: commitments to expectations, and commitments to ideas. When we define unicorns in this manner, we are committing ourselves to the idea of unicorns as something that’s coherent and legible. We are not making a commitment to unicorns existing for real; that is, we do not suddenly expect to see a unicorn in real life. This may be considered an ontological commitment of a sort. We certainly ascribe existence to the idea of a unicorn, at least within our own mind. We don’t, however, ontologically commit ourselves to what the idea of unicorns might theoretically model. Since sentences cannot help but refer to ideas rather than actual entities, regardless of our expectations, the assertion that unicorn blood is silver pertains to this idea of unicorns, nothing that exists outside of our mind.

I’d like to digress momentarily to talk about this standard conundrum:

If a tree falls in a forest and no one is around to hear it, does it make a sound?

This question has a standard solution that I’d consider universally satisfactory. Ultimately, the question isn’t about reality; it’s about the definition of the word “sound”. If by “sound” the asker is speaking of a sensation in the ear, then the answer is “no”. If they mean vibrations in the air, then the answer is “yes”. Under the distributional semantics of the word “sound”, we can talk about this word having values in various directions. For some people, “sound” is assigned the region defined by a positive value in the direction corresponding to sensations in the ear. For others, “sound” is assigned to the region with positive value in the direction corresponding to vibrations in the air. These two regions have heavy overlap in practice. When we experience a sensation, it’s rare for it to have a positive value in one of these but not the other. And so, we assign one of these regions the word “sound”, most of the time having no problem with others who make a different choice, but arriving at disagreements over questions like the one above.

But which is it? What does “sound” actually mean? Well, that’s a choice. Consider the situation in detail. Is there anything that needs to be clarified? Are there vibrations in the air? Yes. Are there any sensations in an ear caused by these vibrations? No. So there’s nothing left to learn. All that’s left is to decide how to describe reality. It may even be useful to split the term, to talk about “type-1 sound” and “type-2 sound”, which usually coincide, but don’t on rare occasions. Regardless, it’s a matter of convention, not a matter of fact, whether the word “sound” should apply.

And so, we’re in sight of the resolution to the unicorn blood argument. One person has a region in their semantic space corresponding to one-horned horses with silver blood, and wants to assign that region the word “unicorn”. The other person has identified a close-by semantic region, but there the blood is red, and they want that region to have the word “unicorn”. Note that neither would think the other’s claim is nonsense. The argument is not predicated on, for example, one person thinking the idea of a unicorn with red blood is incoherent. Both parties agree that the other has identified a meaningful region of semantic space. They are making identical ontological commitments. What they are disagreeing on is a naming convention.

Throughout this series, I will often discuss mathematics and logic as fundamentally subjective activities, but this does not mean I reject mathematical objectivism as such. Rather, the objective character of mathematics moves from being an aspect of mathematics itself to being an aspect of how it’s practiced. Mathematics is done as a social activity carried by a convention which is itself objective, or at least (ideally) as objective as a ruler. Showing that someone is mathematically wrong largely boils down to showing which convention the person is breaking in making an incorrect judgment.

Brouwer, who was the first to really push mathematical intuitionism, described mathematics as a social activity at its core. As a consequence, he argued against the idea of a formal logical foundation before Gödel’s incompleteness theorems were even discovered.

The basic idea of constructivism is to limit our ontological commitments as much as possible. Consider the well-known “I think, therefore I am”. It highlights the fact that the act of thinking and introspection itself implies an ontological commitment to the self. Since we are already doing those things, it’s really not much of a commitment at all. Similarly, the fact that I am writing in a language commits me ontologically to the existence of the language I’m writing in. As I’m doing this anyway, it’s not much of a commitment. For this reason, I call these sorts of commitments “cheap commitments”.

Mathematical and logical entities are ideas. By discussing them, we are committing ourselves to the existence of these entities at least as ideas. For example, if I say “there exists an even natural number”, I am committing myself to the ideas of natural numbers and evenness. I’m also committing myself to the coherence or soundness of these ideas, that the statement in question is meaningful modulo the semantics of the ideas used.

I can easily make grammatical-looking sentences that seem to make some sort of expensive commitment. For example, I could say that g’glemors exist and that a h’plop is an example of a g’glemor on account of hipl’xtheth. If I said those things with any sort of seriousness, I’d be committing myself to the existence of those mentioned things at least as ideas, as well as to the soundness of those ideas. Since those are nonsense words not representing anything at all, I’d obviously be misguided in making such commitments; they certainly aren’t cheap.

The point of a constructivist account is to describe mathematical and logical ideas in such a way that one is committed to their soundness in a cheap way. And here we can start to see the significance of characterizing mathematics and logic as being about ungrounded entities. In order for my commitments to those ideas to be cheap, they must be totally characterized by something that comes from within me, by something that I’m doing anyway when discussing those ideas.

Precommitments and Judgments

We say that an idea is a cheap commitment if, in defining the notion, we summon the entity being defined, or perform the activity which we are judging to be the case. In order to do this, we need to pay attention to precommitments.

A precommitment is a prescription we make of our own behavior. It’s an activity which is being done so long as those prescriptions are being followed. Precommitments are the core of structured thinking. Whenever we impose any pattern or consistency on our thinking, we are making a precommitment. By analyzing our precommitments closely, we can construct, explicitly, ideas which are cheap ontological commitments. If we are actively carrying out a precommitment, then we can cheaply acknowledge the existence of the idea conjured by that precommitment.

Many ungrounded intuitions arise as a form of meaning-as-usage. Some words don’t have meaning beyond the precise way they are used. If you take a word like “elephant”, its meaning is contingent on external information which may change over time. A word like “and”, however, isn’t. As a result, we’d say “and”’s meaning fundamentally boils down to how it’s used, and nothing more. Going beyond that, if we are to focus on ungrounded intuitions which are complete and comprehensible, then we are focusing precisely on those ungrounded intuitions whose definition is precisely a specification of usage, and nothing more. That specification of usage is our precommitment. Of course, usage happens outside the mind, but the rules dictating that usage aren’t, and it’s those canonical rules of usage which I mean by “definition”.
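To make meaning-as-usage concrete, here is a loose sketch of a connective like “and” defined by nothing beyond its allowed patterns of use: one rule for forming a proof of a conjunction, and two for using one. The encoding of proofs as tuples and the function names are illustrative assumptions of mine, not notation from this series.

```python
# A toy "meaning as usage" definition of "and": these three rules are
# the entire definition; nothing else about "and" is specified.

# Usage rule 1: a proof of "A and B" may be formed from a proof of A
# together with a proof of B (introduction).
def and_intro(proof_a, proof_b):
    return ("and", proof_a, proof_b)

# Usage rules 2 and 3: a proof of "A and B" may be used by projecting
# out either component (elimination).
def and_elim_left(proof_ab):
    tag, proof_a, _ = proof_ab
    assert tag == "and"
    return proof_a

def and_elim_right(proof_ab):
    tag, _, proof_b = proof_ab
    assert tag == "and"
    return proof_b

p = and_intro("proof of A", "proof of B")
```

Following the precommitment means only ever manipulating “and”-proofs through these three patterns; allowing a fourth pattern would be a different precommitment, and hence a different concept.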

The basic elements of definitions are judgments. Judgments include things like judging that something is a proposition, or is a program, or is some other syntactic construction. Judgments also include assertions of truth, falsehood, possibility, validity, etc., of some data. However, be aware that a judgment simply consists of a pattern of mental tokens which we may declare. Regardless of what preconceptions about possibility, truth, etc. one has, these should be overwritten by the completed meaning explanation in order to be understood as a purely ungrounded intuition and a cheap commitment.

When we make a judgment, we are merely asserting that we may use that pattern in our reasoning. Precommitments, as we will make use of them here, are collections of judgments. As a consequence, what we are precommitting ourselves to is an allowance of usage for certain patterns of mental tokens when reasoning about a concept. The full precommitment summoning some concept will be called the meaning explanation for that concept.

Ultimately, it is either the case that we make a particular judgment or we don’t. That, however, is a fact about our own behavior, not about the nature of reality in total. Furthermore, someone not making a particular judgment is not automatically making the opposite, or negated, judgment. In fact, such a thing doesn’t even make sense in general. As a result, we don’t reproduce classical logic. (Though, as we’ll eventually see, there are constructive logics which are classical.) It’s worth dispelling the idea that there’s “one true logic”. Questions about which kind of logic (classical, intuitionistic, linear, etc.) is the “true” one are nonsense. A logic is only correct relative to some problem which has an element that is to be modeled by it. Whichever is the more accurate model is the correct one; there is no “one true logic”, and it’s certainly not the case that the intuitions which make up mathematics are governed by a classical logic. For example, the existence of theoretically unsolvable problems (e.g. the halting problem) illustrates that our capacity for judging truth is fundamentally constrained, not by some objective transcendental standard for truth, but rather by our ability to make proofs.

To summarize, to define a concept we give a list of judgments: rules dictating which patterns of tokens we can use when considering the concept. So long as these rules are being followed, the concept exists as a coherent idea. If the precommitment is violated, for example by making a judgment about the concept which is not prescribed by the rules, then the concept, as defined by the original precommitment, no longer exists. There may be a new precommitment that defines a different concept using the same tokens which is not violated, but that, being a different precommitment, constitutes a different meaning explanation, and so its summoned concept does not have the same meaning. So long as I follow a precommitment defining a concept, it is hypocritical of me to deny the coherence of that concept, just as it would be hypocritical to deny my language as I speak, or to deny my existence so long as I live.

Computation to Canonical Form

We are now free to explore an example of the construction of an ungrounded intuition. I should be specific and point out that not all ungrounded intuitions are under discussion. For the sake of mathematics and logic, intuitions must be completely comprehensible. Unlike grounded intuitions, an ungrounded one may be such that it’s never modified by new information. This doesn’t describe all ungrounded intuitions, but it describes the ones we’re interested in.

One of the most important judgments we will consider is of the form a ⇒ b. It is a kind of computational judgment. It’s worth explaining why computation is considered before anything else in mathematics. To digress a bit, it’s easy to argue that some notion of computation is necessary for doing even the most basic aspects of ordinary mathematics. Consider, for example, the standard theorem: for all propositions A and B, A ∧ B → A. The universal quantification allows us to perform a substitution, getting, for example, (1 = 1) ∧ (2 = 2) → (1 = 1) as an instance.

We should meditate on substitution, an essential requirement of even the most basic and ancient aspects of logic. Substitution is an algorithm, a computation which must be performed somehow. In order to realize (1 = 1) ∧ (2 = 2) → (1 = 1), we must, at some point, be doing the activity corresponding to the substitution of A with 1 = 1 and the activity corresponding to the substitution of B with 2 = 2. Substitution will appear over and over again in various guises, acting as a central and powerful notion of computation. To emphasize, once substitution is available, we are most of the way toward complete and fully general Turing-complete computation via the lambda calculus. Most of the missing features pertain to explicit variable binding, which we need anyway in order to use the quantifiers of first-order logic. I don’t think it’s really debatable that computation ontologically precedes logic. One can do logic as an activity, and much of that activity is computational in nature.
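Since substitution is claimed to be an algorithm, it may help to see one written down. Below is a minimal sketch in Python, with formulas encoded as nested tuples; the encoding and the names `subst`, `theorem`, and `instance` are my own illustrative choices, not anything from the text.

```python
# A minimal substitution algorithm over a toy formula representation.
# Variables are ("var", name); connectives are tagged tuples.

def subst(formula, env):
    """Replace each variable in `formula` by its entry in `env`."""
    tag = formula[0]
    if tag == "var":
        # A variable: look up a substitute, or leave it unchanged.
        return env.get(formula[1], formula)
    # A compound formula: substitute recursively in each subformula.
    return (tag,) + tuple(subst(sub, env) for sub in formula[1:])

# The schematic theorem "A and B implies A"...
theorem = ("implies", ("and", ("var", "A"), ("var", "B")), ("var", "A"))

# ...instantiated by substituting concrete propositions for A and B.
instance = subst(theorem, {"A": ("eq", 1, 1), "B": ("eq", 2, 2)})
```

Even at this toy scale, the recursion walks the whole formula: performing a substitution really is an activity, carried out step by step.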

Before expositing on some example judgments, we should address the need for isolating concepts. Consider a theory with natural numbers and products (pairs). We must ask what constitutes a natural number and a product. By default, we can form a natural number as either zero or the successor of a natural number, e.g. 0, s(0), s(s(0)), s(s(s(0))), … A product can be formed via (a, b), where a is an A and b is a B. Additionally, we would have that, if n is a natural number, then π₁((n, b)) (where π₁ is a projection function) is a natural number, and if π₁((n, b)) is a natural number, then s(π₁((n, b))) is a natural number, and if s(π₁((n, b))) is a natural number, then π₁((s(π₁((n, b))), c)) is a natural number, etc. to infinity. This situation gets branchingly more complex as we add new concepts to our theory. If we don’t define concepts as fundamentally isolated from each other, we inhibit the extensibility of our logic. This is both unpragmatic and unrealistic, as we will want to extend the breadth of concepts we can deal with as we model more novel things. Furthermore, the coherence of the concept of a natural number should not depend on the coherence of the notion of a product. Ultimately, each concept should be defined by some precommitment consisting of a list of rules for making judgments. If we entertain this infinite regress, then there may be no way in general to state what the precommitment in question even is.
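The isolation being argued for can be pictured by giving each concept its own closed definition, with neither mentioning the other. The Python encoding below is an illustrative assumption of mine, not notation from the text.

```python
from dataclasses import dataclass

# Natural numbers, defined in isolation: the only formation rules are
# Zero and Succ. Nothing here mentions products.
@dataclass(frozen=True)
class Zero:
    pass

@dataclass(frozen=True)
class Succ:
    pred: object

# Products, defined in isolation: the only formation rule is Pair.
# Nothing here adds a new way of forming a natural number.
@dataclass(frozen=True)
class Pair:
    fst: object
    snd: object

two = Succ(Succ(Zero()))   # a natural number, by the number rules alone
p = Pair(two, Zero())      # a product, by the product rule alone
first = p.fst              # projection recovers the number, but this is a
                           # fact about products, not a new formation rule
                           # for natural numbers
```

Each definition stands or falls on its own, so the coherence of the numbers never depends on the coherence of the products.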

At the core of our definitions will be canonical forms. Every time we define a new concept, we will assert what its canonical forms are. For example, in defining the natural numbers we will judge that 0 is a (canonical) natural number and that, assuming n is a natural number, we can conclude that s(n) is a natural number. We can’t assume this alone, however. Consider, for example, 2 + 2, which should be a natural number, but isn’t in the correct form. We now have an opportunity to explain ⇒. a ⇒ b indicates that we start out with some mental instantiation a, and after some mental attention, it becomes the instantiation b. So we have, for example, 2 + 2 ⇒ s(s(s(s(0)))). When I say a ⇒ b, I do not mean that a is equal to b. That’s a separate kind of judgment. This means our full judgment is that n is a natural number iff n ⇒ 0 or n ⇒ s(m) for some natural number m. There are some details missing from this definition, but it should serve as a guiding example, the first rough sketch of what I mean by a meaning explanation.
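A meaning explanation of this shape can be prototyped as a small evaluator: terms are brought to a canonical form (zero or a successor), and “is a natural number” becomes the derived judgment that evaluation yields one of those forms. This is a rough sketch under my own term encoding; the names `evaluate` and `is_nat` are assumptions, not from the text.

```python
# Terms: 0, ("succ", t), or ("add", t1, t2). Canonical forms are 0 and
# ("succ", t); "add" must compute away.

def evaluate(term):
    """The judgment a => b: bring `term` to a canonical form."""
    if term == 0:
        return 0                                    # 0 is canonical
    tag = term[0]
    if tag == "succ":
        return term                                 # succ(t) is canonical as-is
    if tag == "add":
        left = evaluate(term[1])
        if left == 0:
            return evaluate(term[2])                # 0 + b => b
        return ("succ", ("add", left[1], term[2]))  # s(a) + b => s(a + b)
    raise ValueError(f"unknown term: {term!r}")

def is_nat(term):
    """The full judgment: `term` is a natural number iff it evaluates
    to 0 or to succ(m) for some natural number m."""
    canonical = evaluate(term)
    if canonical == 0:
        return True
    return canonical[0] == "succ" and is_nat(canonical[1])

two = ("succ", ("succ", 0))
two_plus_two = ("add", two, two)
```

Note that `evaluate` stops at the outermost successor without evaluating underneath it, matching the judgment as stated: 2 + 2 need only step to some s(m) where m is itself a natural number.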

It is worth digressing somewhat to critique the axiomatic method. Most people, especially when first learning of a subject, will experience a mathematical or logical concept as a grounded intuition. This is reflected in a person’s answer to questions such as “why is addition commutative?”. Most people could not answer. It is not part of the definition of addition or numbers for this property to hold. Rather, it is a property stemming from more sophisticated reasoning involving mathematical induction. A person can, nonetheless, feel an understanding of mathematical concepts and an acceptance of their properties without knowledge of their underlying definitions. Axiomatic methods, such as the axioms of ZFC, don’t actually define what they are about. Instead, they list properties that their topic must satisfy.

The notion of ZFC-set, in some sense, is grounded by an understanding of the axioms, though it is still technically an ungrounded intuition. This state of affairs holds for any axiomatic system. There is something fundamentally ungrounded about a formal logic, but it’s not the concepts which the axioms describe. Rather, what we have in a formal logic is a meaning explanation for the logic itself. That is, the axioms of the logic tell us precisely what constitutes a proof in the logic. In this way, we may formulate a meaning explanation for any formal logic, consisting of judgments for each axiom and rule of inference. Consequently, we can cheaply commit ourselves to the coherence of the logic as an idea. What we can’t cheaply commit ourselves to are the ideas expressed within the logic. After all, a formal logic could be inconsistent; its ideas may be incoherent.

As a consequence, the notion of a coherent idea of ZFC-set cannot be committed to cheaply. This holds similarly for any concept described purely in terms of axioms. It might be made cheap by appealing to a sufficient meaning explanation, but without additional effort, things treated purely axiomatically lack proper definitions in the sense used here.