# Christian_Szegedy

Karma: 654
• In order to define the semantics,… This isn’t precise enough for me to agree that it’s true. Is it a claim? A new definition?

First: sorry for the bad grammar! Let me start by rephrasing the first sentence a bit more clearly:

“In order to define semantics, we need to define a map between the logic and the model …”

It is correct that this description constrains semantics to maps between symbolically checkable systems. Physicists may not agree with this view and could say: “For me, semantics is a mapping from a formal system to a physical system that could be continuous, or to which the Church thesis does not hold.”

Model theoreticians, however, define their notion of semantics between logical systems only. Therefore I think I am in pretty good agreement with their nomenclature.

The message is that even if we consider a continuous system mathematically (for example, the real numbers), only those of its aspects matter which can be verified by discrete information processing methods. What I argue here is that this implicit projection to the discrete is best made explicit by viewing it from an algorithmic point of view.

Although we can not check verify the correctness … What do you mean? Are you talking about some process other than the proof checking program?

It is more like model checking: given a statement like “For each x: F(x)”, since your input domain is countable, you can just loop over all substitutions. This program will never stop if the statement is true, but you can at least refute the statement in a finite number of steps if it is false. This is why I consider falsifiability important.
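This refutation loop can be sketched in a few lines of Python (the predicate and the optional step bound are placeholders for illustration, not part of the original argument):

```python
from itertools import count

def refute(F, max_steps=None):
    """Search for a counterexample to 'for each x: F(x)' over the naturals.

    Returns a counterexample if one exists; if the universal statement is
    true, the loop never terminates (here cut off by max_steps for demo
    purposes), which is exactly the falsifiability asymmetry described above.
    """
    for x in count():
        if max_steps is not None and x >= max_steps:
            return None  # gave up without finding a counterexample
        if not F(x):
            return x  # the universal statement is refuted at x

# A false universal statement is refuted in finitely many steps:
print(refute(lambda x: x < 100))  # -> 100
```

A true statement, by contrast, keeps the loop running forever: refutation is possible in finite time, confirmation is not.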

I agree that there is also a caveat: this works directly only for simple expressions with just universal (“for each”) quantifiers, but not for more complex logical expressions like the twin primes conjecture, which can be cast as:

foreach x: exists y > x: prime(y) and prime(y+2)

Still, this expression can be cast into the form “T(x) halts for every input x”, where T is the program that searches for the next pair of twin primes both bigger than x.
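As a concrete illustration, the program T for the twin primes case could look like the following sketch (the naive trial-division primality test is just for demonstration):

```python
def is_prime(n):
    """Trial-division primality test; naive but sufficient for the sketch."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def T(x):
    """Search for the next twin prime pair (y, y + 2) with y > x.

    T halts on every input x if and only if there are infinitely many
    twin primes: the conjecture becomes a halting statement about T.
    """
    y = x + 1
    while not (is_prime(y) and is_prime(y + 2)):
        y += 1
    return y

print(T(10))  # -> 11, since (11, 13) is the next twin pair above 10
```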

But what about longer sequences of quantifiers, like

f = foreach a: exists b: foreach c: exists d: … F(a, b, c, d, …)

Can this still be cast into the form “T(x) halts for every input x”? If it could not, we would need to add another layer of logic around the halting of Turing machines, which would defeat our original purpose.

In fact, there is a neat trick to avoid that: you can define a program T’(n) which takes a single natural number and checks the validity of (not f) for every substitution of total length < n. Then f is equivalent to the statement: T’(n) halts for every natural number n.

Which means we managed to hide the cascade of quantifiers within the algorithm T’. Cool, huh?
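To make the trick concrete for the simplest alternation, “foreach a: exists b: F(a, b)”, here is a sketch of a corresponding T’ (the names are illustrative; the general case described above additionally has to enumerate substitutions of total length < n over all the quantified variables):

```python
from itertools import count

def T_prime(F, n):
    """For every a < n, search (without bound) for a witness b with F(a, b).

    T_prime halts on input n exactly when each a < n has a witness, so
    'foreach a: exists b: F(a, b)' is equivalent to the single halting
    statement 'T_prime halts on every natural number n'.
    """
    witnesses = []
    for a in range(n):
        # This inner search loops forever if some a has no witness.
        witnesses.append(next(b for b in count() if F(a, b)))
    return witnesses

# 'For each a there exists b with b > a' -- every bound n checks out:
print(T_prime(lambda a, b: b > a, 4))  # -> [1, 2, 3, 4]
```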

• The overall message is not really new technically, but its philosophical implications are somewhat surprising even for mathematicians. In general, looking at the same thing from different angles helps to acquire a more thorough understanding, even if it does not necessarily provide a clear short-term benefit.

A few years ago, I chatted with a few very good mathematicians who were not aware of the (relatively straightforward) equivalence between the theorems of Turing and Goedel, but they could see it within a few minutes and had no problem grasping the inherent, natural connection between logical systems and program evaluating algorithms.

I think that there is quite a bit of mysticism and misconception regarding mathematical logic, set theory, etc. among the general public. I think it is valuable to put them in a context that clarifies their real, inherently algorithmic nature. It may also be helpful for cleaning up one’s own world view.

I am sorry if my account was unclear at points, and I am glad to clarify any of them.

• I expected the reaction about countably infinite models, but I did not expect it to be the first. ;)

I wanted to get into that in the write-up, but I had to stop at some point. The argument is that in order to have scientific theories, we need falsifiability, which means that science always necessarily deals with a discrete projection of the physical world. On the other hand, so far every discrete manifestation of a physical system has seemed to be modelable by Turing machines. (This assumption is called the Church thesis.) If you combine these two, you arrive at the above conclusion.

Another reason for it not being a problem is that every (first order) axiom system has a countable model, by the Lowenheim-Skolem theorem. (Yes, even the theory of the real numbers has a countable model, which is also known as the Skolem paradox.)

Actually, I don’t think that the technical content of the write-up is novel; it is probably something that was already clear to Turing, Church, Goedel and von Neumann in the 1940s and 50s. IMO, the takeaway is a certain, more pragmatic way of thinking about logic in the age of information processing, instead of sticking to an outdated intuition, and also the explicit recognition that the domains of mathematical logic and AI are much more directly connected than they would naively seem.

# Logic: the science of algorithm evaluating algorithms

22 Feb 2012 18:13 UTC
6 points
• The object of life is not to be on the side of the majority, but to escape finding oneself in the ranks of the insane.

Marcus Aurelius

• I agree with you. I also think that there are several reasons for that:

First, competitive games (intellectual or physical sports) are easier to select and train for, since the objective function is much clearer.

The other reason is more cultural: if you train your child for something more useful like science or mathematics, then people will say: “Poor kid, are you trying to make a freak out of him? Why can’t he have a childhood like anyone else?” Traditionally, there is much less opposition to music, art or sports training. Perhaps they are viewed as “fun activities.”

Thirdly, it also seems that academic success is a function of more variables: communication skills, motivation, perspective, taste, wisdom, luck, etc. So early training will result in much less of a head start than in a more constrained area like sports or music, where it is almost mandatory for success (age 10, or even 6, is almost too late to begin seriously in some of those areas).

• I found the most condensed essence (also parody) of religious arguments for fatalism in Greg Egan’s Permutation City:

Even though I know God makes no difference. And if God is the reason for everything, then God includes the urge to use the word God. So whenever I gain some strength, or comfort, or meaning, from that urge, then God is the source of that strength, that comfort, that meaning. And if God—while making no difference—helps me to accept what’s going to happen to me, why should that make you sad?

Logically irrefutable, but utterly vacuous...

• You should not take the statement too literally: look at it in its historical context. Probably the biggest problems in Russell’s time were wars caused by nationalism and unfair resource allocation due to bad (idealistic/​traditionalist) policies. Average life expectancy was around 40-50 years. I don’t think anyone considered e.g. mortality a problem that can or should be solved. (Neither do over 95% of people today.) Population was much smaller. Earth was also in a much more pristine state than today.

Times have changed. We have more technical issues today, since we can address more issues with technology; plus we are on a general trajectory that is ecologically unsustainable if we don’t manage to invent and use the right technologies quickly. I think this is the fundamental mistake traditional ecological movements make: there is no turning back. Either we manage to progress rapidly enough to counteract what we broke (and will inevitably break), or our civilization collapses. There is no stopping or turning back; we have already bet our future. Being reasonable would have worked 100 years ago; today we must be very smart as well.

• Did you try GNU Go? That should be hard enough for most beginners.

The problem with GNU Go is that it teaches a style that would not be effective in beating humans. Generally, you have to build up moderately difficult situations with a deep sequence of forcing moves. These kinds of deep but easy-to-prune trees are very easily read by humans, but GNU Go sucks at them, especially if they are on the interaction boundary of bigger fights.

Still, it can be a valuable learning tool, but one will learn a different skill set than by playing with humans.

• I think it is mostly hopeless trying to teach rationality to most people.

For example, both of my parents studied math at university and still have a very firm grip on the fundamentals.

I just got a phone call yesterday from my father in Germany saying: “We saw in the news that a German tourist couple got killed in a shooting in San Francisco. Will you avoid going out after dark?” When I tried to explain that I won’t update my risk estimates based on any such singular event, he seemed to listen and to understand formally what I said. Anyhow, he was completely unimpressed, finishing the conversation in an even more worried tone: “I see, but you will take care, won’t you?”

• That’s true.

But the parallel was a bit more specific: “good sense of humor” (which he concretely brought up as the most typical example) is an attribute one can easily claim to have, as it is impossible to measure.

• You missed my point. (Which you would not have, if you had read The Upside of Irrationality by Dan Ariely, an excellent book.)

• This sounds like “I am not very good looking, but have a great sense of humor.” ;)