Logic: the science of algorithms evaluating algorithms

“Mathematical logic is the science of algorithms evaluating algorithms.”

Do you think that this is an overly generalizing, far-fetched proposition or an almost trivial statement? Wait, don’t cast your vote before the end of this short essay!

It is hard to dispute that logic is the science of drawing correct conclusions. It studies theoretically falsifiable rules that lead to derivations which are verifiable in a finite number of mechanical steps, even by machines.

Let’s dig a bit deeper by focusing on the “drawing correct conclusions” part first. It implies that logic deals both with abstract rules (“drawing”) and with their meaning (“conclusions”).

Logic is not just about the mindless following of certain rules (that’s algebra :P); its conclusions must have truth values that refer to some “model”. Take for example De Morgan’s law:

not (a and b) = (not a) or (not b).

It can be checked for all four possible substitutions of boolean values: a = false, b = false; a = false, b = true; …. If we agree upon the standard meaning of the logical not, or and and operators, then we must conclude that De Morgan’s rule holds. On the other hand, the similar-looking rule

not (a and b) = (not a) and (not b)

can be easily refuted by evaluating it on the counterexample a = false, b = true.
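This finite check is easy to carry out mechanically; a minimal Python sketch (the function names are ours, purely for illustration):

```python
from itertools import product

def de_morgan_holds(a, b):
    # De Morgan's law: not (a and b) = (not a) or (not b)
    return (not (a and b)) == ((not a) or (not b))

def wrong_rule_holds(a, b):
    # The similar-looking but incorrect rule
    return (not (a and b)) == ((not a) and (not b))

# Exhaustively check all four substitutions of boolean values.
assert all(de_morgan_holds(a, b)
           for a, b in product([False, True], repeat=2))

# The wrong rule is refuted by the counterexample a = False, b = True.
assert not wrong_rule_holds(False, True)
```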

Generally: in any useful mathematical system, logical conclusions should work in some interesting model.

However, in general, total verifiability is way too much to ask. As Karl Popper pointed out, one must often be satisfied with the falsifiability of scientific statements as a criterion. For example, the following logical rule

not (for each x: F(x)) ⇔ exists x: not F(x)

is impossible to check for every formula F. Statements that are not directly checkable include all those where the set of all possible substitutions is (potentially) infinite.
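The asymmetry between verification and falsification can be sketched as a search loop; `find_counterexample` and the sample predicate below are hypothetical illustrations, not part of any standard library:

```python
def find_counterexample(F, limit=None):
    """Search for an x with not F(x) by enumerating substitutions.
    Over an infinite domain the loop may never terminate, but any
    counterexample that exists is reached in a finite number of steps:
    the universal statement is falsifiable, not verifiable."""
    x = 0
    while limit is None or x < limit:
        if not F(x):
            return x  # refutes "for each x: F(x)"
        x += 1
    return None  # no counterexample below the (artificial) bound

# Hypothetical predicate, for illustration only: "every natural number
# is less than 100" is false, and the search finds the first witness.
assert find_counterexample(lambda x: x < 100) == 100

# A bounded run on a true predicate finds nothing; the unbounded
# search would simply never stop.
assert find_counterexample(lambda x: x >= 0, limit=1000) is None
```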

This observation could be formalized by saying that a mapping from the abstract to the concrete is required. This thinking can be made precise by formalizing further: logicians study the connection between axiom systems and their models.

But wait a minute: isn’t there something fishy here? How could the process of formalization be formalized? Isn’t this some kind of circular reasoning? In fact, it is deeply circular, on different levels. The most popular way of dealing with this Gordian knot is simply to cut it, using some kind of naive set theory in which the topmost level of arguments is concluded.

This may be good enough for educational purposes, but the following questions should always be asked: What is the basis of those top-level rules? Could there be any mistake there? Falsifiability always implies an (at least theoretical) possibility of our rules being wrong, even at the topmost level. Does using a meta-level set theory mean that there is some unquestionable rule we have to accept as God-given, at least there?

Fortunately, the falsifiability of axioms has another implication: it requires only a simple, discrete, and finite process to refute them: an axiom or rule is either falsified or not. Checking counterexamples is like experimental physics: any violation must be observable and reproducible. There are no fuzzy, continuous measurements here, only discrete manipulations. If no mistakes were made and some counterexample is found, then one of the involved logical rules or axioms had to be wrong.

Let’s squint our eyes a bit and look at the whole topic from a different perspective: in the traditional view, axiom systems are considered to be sets of rules that allow for drawing conclusions. This can also be rephrased as: axiom systems can be cast into programs that take chains of arguments as parameters and test them for correctness.

This seems good enough for the formal rules, but what about the semantics (their meaning)?

In order to define the semantics, there needs to be a map to something else formally checkable, ruled by symbolics, which is just information processing again. Following that path, we end up with the answer: a logical system is a program that checks that certain logical statements hold for the behavior of another program (the model).

This is just the first simplification, and we will see how the notions of “check”, “logical statement”, and “holds” can also be dropped and replaced by something more generic and natural. But first, let’s get concrete and look at the two most basic examples:

  1. Logical formulas: The model is the set of all logical formulas given in terms of the binary and, or and the not function. The axiom system consists of a few logical rules like the commutativity, associativity and distributivity of and and or, the De Morgan laws, as well as the rule not (not a) = a. (The exact choice of the axiom system is somewhat arbitrary and is not really important here.) This traditional description can be turned into: the model is a program that takes a boolean formula as input and evaluates it on given (input) substitutions. The axiom system can be turned into a program that, given a chain of derivations of equalities of boolean formulas, checks that each step is a rewriting in terms of one of the predetermined axioms, “proving” the equality of the formulas at the beginning and end of the chain of conclusions. Note that given two supposedly equal boolean formulas (“equality proven using the axioms”), a straightforward loop around the model could check that those formulas are really equivalent, and therefore our anticipated semantic relationship between the axiom system and its model is clearly falsifiable.

  2. Natural numbers: Our model is the set of all arithmetic expressions using +, *, - on natural numbers, predicates using < and = on arithmetic expressions, and any logical combination of predicates. For the axiom system, we can choose the set of Peano axioms. Again, we can turn the model into a program by evaluating any valid formula in the model. The axiom system can again be turned into a program that checks the correctness of chains of logical derivations. Although we cannot verify the correctness of every Peano formula in the model by substituting each possible value, we can still have an infinite loop that arrives at every substitution within a finite number of steps. That is: falsifiability still holds.
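To make the first example concrete, here is a minimal sketch of the “loop around the model”: boolean formulas are represented (by our own arbitrary choice) as Python functions, and a claimed equality is tested by evaluating both formulas on every substitution:

```python
from itertools import product

def equivalent(f, g, num_vars):
    """A loop around the model: two boolean formulas (represented as
    Python functions of their variables) are equivalent exactly when
    they agree on every substitution, so an equality derived from the
    axioms is directly falsifiable."""
    return all(
        f(*values) == g(*values)
        for values in product([False, True], repeat=num_vars)
    )

# De Morgan's law survives the check; the broken variant does not.
assert equivalent(lambda a, b: not (a and b),
                  lambda a, b: (not a) or (not b), 2)
assert not equivalent(lambda a, b: not (a and b),
                      lambda a, b: (not a) and (not b), 2)
```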

The above two examples can easily be generalized by saying: “A logical system is a program that checks that certain kinds of logical statements can be derived for the behavior of another program (the model).”

Let us simplify this a bit further. We can replace the checking part altogether by noticing that, given a statement, the axiom-system checker program can loop over all possible chains of derivations for the statement and for its negation. If that program stops, then the logical correctness of the statement (or of its negation) has been established; otherwise it can neither be proven nor refuted from those axioms. (That is: it is independent of those axioms.)
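The proof-search loop can be sketched as follows; `derivations` and `checks` stand in for a real enumeration of derivation chains and a real axiom-system checker, and the instantiation at the end is a deliberately trivial toy, not an actual proof system:

```python
import itertools

def decide(statement, negate, derivations, checks):
    """Replace the proof checker by a proof searcher: loop over all
    possible chains of derivations, testing each one against the
    statement and against its negation. If the statement is
    independent of the axioms, this loop never stops."""
    for chain in derivations():
        if checks(chain, statement):
            return True   # the statement was derived
        if checks(chain, negate(statement)):
            return False  # its negation was derived

# Toy instantiation (purely illustrative): "statements" are nonzero
# integers, a "chain" derives a statement when they are equal, and
# negation flips the sign. Then positive statements are provable and
# negative ones refutable.
assert decide(5, lambda s: -s, itertools.count, lambda c, s: c == s) is True
assert decide(-3, lambda s: -s, itertools.count, lambda c, s: c == s) is False
```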

Therefore, we end up just saying: “A logical system is a program that correctly evaluates whether a certain logical statement holds for the behavior of another program (whenever the evaluator program halts).”

Unfortunately, we still have the relatively fuzzy term “logical statement” in our description. Is this necessary?

In fact, quantifiers in logical statements can easily be replaced by loops around the evaluating program that check the corresponding substitutions. Functions and relations can be resolved similarly. So we can extend the model program from a simple substitution method to one searching for some solution, by adding suitable loops around it. The main problem is that those loops may be infinite. Still, they always loop over a countable set. Whenever there is a matching substitution, the search program will find it. We have at least falsifiability, again. For example, the statement of Fermat’s Last Theorem is equivalent to the statement that the program searching for its solution never stops.
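The Fermat example can be written out as such a search program; the enumeration order below is one arbitrary choice that reaches every candidate tuple in a finite number of steps:

```python
from itertools import count

def fermat_search(max_total=None):
    """Search for a counterexample to Fermat's Last Theorem:
    natural numbers a, b, c >= 1 and n > 2 with a**n + b**n == c**n.
    Enumerating by total = n + a + b + c reaches every candidate in
    finitely many steps, so if a counterexample existed the search
    would stop; the theorem says the unbounded search never does."""
    totals = count(6) if max_total is None else range(6, max_total)
    for total in totals:
        for n in range(3, total):
            for a in range(1, total):
                for b in range(1, total):
                    c = total - n - a - b
                    if c >= 1 and a ** n + b ** n == c ** n:
                        return (n, a, b, c)
    return None  # only reachable in the artificially bounded variant

# A bounded run for illustration; the unbounded search (max_total=None)
# not stopping is exactly the content of the theorem.
assert fermat_search(30) is None
```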

In short: the statement “logical statement S holds for a program P” can always be replaced by either “program P’ stops” or “program P’ does not stop” (where P’ is a suitable program using P as a subroutine, depending on the logical statement). That is, we finally arrive at our original statement:

“Mathematical logic is the science of algorithms evaluating algorithms [with the purpose of making predictions about their (stopping) behavior.]”

Simple enough, isn’t it? But can this be argued backward? Can the stopping problem always be recast as a model-theoretic problem on some model? In fact, it can. Logic is powerful, and the semantics of the workings of a program is easily axiomatized. There really is a relatively straightforward one-to-one correspondence between model theory and algorithms taking programs as arguments to predict their (stopping) behavior.

Still, what can be gained by having such an algorithmic view?

First of all, it has a remarkable symmetry that is not explicitly apparent from the traditional viewpoint: it is much less important which program is the model and which is the “predictor”. Prediction goes both ways: the roles of the programs are mostly interchangeable. The distinction between concrete and abstract vanishes.

Another point is the conceptual simplicity: the need for assuming a meta-system vanishes. We treat algorithmic behavior as the single source of everything and look for symmetric correlations between the behavior of programs, instead of postulating higher and higher levels of meta-theories.

Also, the algorithmic view has quite a bit of simplifying power due to its generality:

Turing’s halting theorem is conceptually very simple. (It seems almost too simple to be interesting.) Goedel’s theorem, on the other hand, looks more technical and involved. Still, by the above correspondence, Turing’s halting theorem is basically just a more general version of Goedel’s theorem. By the correspondence between the algorithmic and the logical view, Turing’s theorem can be translated to: every generic enough axiom system (corresponding to a Turing-complete language) has at least one undecidable statement (an input program for which the checking program does not stop). The only technically involved part of Goedel’s theorem is to check that its corresponding program is Turing complete. However, with the right goal in mind, that is not hard to check at all.