# Deleting paradoxes with fuzzy logic

You’ve all seen it. Sentences like “this sentence is false”: if they’re false, they’re true, and vice versa, so they can’t be either true or false. Some people solve this problem by doing something really complicated: they introduce infinite type hierarchies wherein every sentence you can express is given a “type”, which is an ordinal number, and every sentence can only refer to sentences of lower type. “This sentence is false” is not a valid sentence there, because it refers to itself, but no ordinal number is less than itself. Eliezer Yudkowsky mentions but says little about such things. What he does say, I agree with: ick!

In addition to the sheer icky factor involved in this complicated method of making sure sentences can’t refer to themselves, we have deeper problems. In English, sentences can refer to themselves. Heck, this sentence refers to itself. And this is not a flaw in English, but something useful: sentences ought to be able to refer to themselves. I want to be able to write stuff like “All complete sentences written in English contain at least one vowel” without having to write it in Spanish or as an incomplete sentence.1 How can we have self-referential sentences without having paradoxes that result in the universe doing what cheese does at the bottom of the oven? Easy: use fuzzy logic.

Now, take a nice look at the sentence “this sentence is false”. If your intuition is like mine, this sentence seems false. (If your intuition is unlike mine, it doesn’t matter.) But obviously, it isn’t false. At least, it’s not completely false. Of course, it’s not true, either. So it’s not true or false. Nor is it the mythical third truth value, clem2, as clem is not false, making the sentence indeed false, which is a paradox again. Rather, it’s something in between true and false—“of medium truth”, if you will.

So, how do we represent “of medium truth” formally? Well, the obvious way to do that is using a real number. Say that a completely false sentence has a truth value of 0, a completely true sentence has a truth value of 1, and the things in between have truth values in between.3 Will this work? Why, yes, and I can prove it! Well, no, I actually can’t. Still, the following, trust me, is a theorem:

Suppose there is a set of sentences, and there are N of them, where N is some (possibly infinite) cardinal number, and each sentence’s truth value is a continuous function of the other sentences’ truth values. Then there is a consistent assignment of a truth value to every sentence. (More tersely: every continuous function [0,1]^N → [0,1]^N, for every cardinal number N, has at least one fixed point.)
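The theorem only promises that a consistent assignment exists; for well-behaved systems you can often hunt one down numerically. A minimal sketch (the two example sentences and the damped iteration scheme are illustrative assumptions, not part of the theorem):

```python
def update(tv):
    """One continuous update rule for a pair of mutually referential
    sentences (the sentences themselves are hypothetical examples):
    A says "B is true"; B says "A is false"."""
    a, b = tv
    return (b, 1.0 - a)

def find_fixed_point(tv=(0.0, 0.0), damping=0.5, steps=200):
    # Damped iteration: blend the old values with the updated ones.
    # Convergence here is a property of this particular map; the theorem
    # only promises that a fixed point exists, not that iteration finds it.
    for _ in range(steps):
        new = update(tv)
        tv = tuple(damping * old + (1 - damping) * upd
                   for old, upd in zip(tv, new))
    return tv

a, b = find_fixed_point()
print(round(a, 3), round(b, 3))  # 0.5 0.5: the liar pair lands in the middle
```

Plain iteration of this map cycles forever (it is a rotation-like map), which is why the sketch averages each step with the previous values.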

So for every set of sentences, no matter how wonky their self- and cross-references are, there is some consistent assignment of truth values to them. At least, this is the case if all their truth values vary continuously with each other. This won’t happen under strict interpretations of sentences such as “this sentence’s truth value is less than 0.5”: this sentence, interpreted as black and white, has a truth value of 1 when its truth value is below 0.5 and a truth value of 0 when it’s not. This is inconsistent. So, we’ll ban such sentences. No, I don’t mean ban sentences that refer to themselves; that would just put us back where we started. I mean we should ban sentences whose truth values have “jumps”, or discontinuities. The sentence “this sentence’s truth value is less than 0.5” has a sharp jump in truth value at 0.5, but the sentence “this sentence’s truth value is significantly less than 0.5” does not: as the value it claims for itself drops from 0.5 down to 0.4 or so, the truth of the claim rises from 0.0 up to 1.0, leaving us a consistent truth value for that sentence somewhere around 0.45.
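To make the contrast concrete, here is one way to encode the two sentences above as maps from a sentence’s claimed truth value to the truth value of the claim. The linear ramp for “significantly less than 0.5” is an assumed reading (the post doesn’t pin down a formula), so the exact fixed point depends on it:

```python
def strict(t):
    # "this sentence's truth value is less than 0.5", read crisply:
    # jumps from 1 to 0 at t = 0.5, so t == strict(t) has no solution.
    return 1.0 if t < 0.5 else 0.0

def graded(t):
    # "this sentence's truth value is significantly less than 0.5":
    # an assumed linear ramp from 1 down to 0 as t goes from 0.4 to 0.5.
    if t <= 0.4:
        return 1.0
    if t >= 0.5:
        return 0.0
    return (0.5 - t) / 0.1

# The strict sentence has no consistent value (sampled check):
assert all(abs(strict(t) - t) > 0.4 for t in (0.0, 0.25, 0.49, 0.5, 0.75, 1.0))

# The graded one does; find it by bisection (graded(t) - t is decreasing):
lo, hi = 0.4, 0.5
for _ in range(50):
    mid = (lo + hi) / 2
    if graded(mid) > mid:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # 0.455 under this particular ramp
```

A steeper ramp (say, falling only between 0.49 and 0.5) would push the consistent value closer to 0.5; the point is that some consistent value always exists once the jump is smoothed out.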

Edit: I accidentally said “So, we’ll not ban such sentences.” That’s almost the opposite of what I wanted to say.

Now, at this point, you probably have some ideas. I’ll get to those one at a time. First, is all this truth value stuff really necessary? To that, I say yes. Take the sentence “the Leaning Tower of Pisa is short”. This sentence is certainly not completely true; if it were, the Tower would have to have a height of zero. It’s not completely false, either; if it were, the Tower would have to be infinitely tall. If you tried to come up with any binary assignment of “true” and “false” to sentences such as these, you’d run into the Sorites paradox: how tall would the Tower be if any taller tower were “tall” and any shorter tower were “short”? A tower a millimeter higher than what you say would be “tall”, and a tower a millimeter shorter would be “short”, which we find absurd. It would make a lot more sense if a change of height of one millimeter simply changed the truth value of “it’s short” by about 0.00001.

Second, isn’t this just probability, which we already know and love? No, it isn’t. If I say that “the Leaning Tower of Pisa is extremely short”, I don’t mean that I’m very, very sure that it’s short. If I say “my mother was half Irish”, I don’t mean that I have no idea whether she was Irish or not, and might find evidence later on that she was completely Irish. Truth values are separate from probabilities.

Third and finally, how can this be treated formally? I say, to heck with it. Saying that truth values are real numbers from 0 to 1 is sufficient; regardless of whether you say that “X and Y” is as true as the product of the truth values of X and Y, or as true as the less true of the two, you have an operation that behaves like “and”. If two people have different interpretations of truth values, you can feel free to just add more functions that convert between the two. I don’t know of any “laws of truth values” that fuzzy logic ought to conform to. If you come up with a set of laws that happen to work particularly well or be particularly elegant (percentiles? decibels of evidence?), feel free to make it known.
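For instance, the two “and” operations just mentioned (the product, and the less true of the two) each come with a matching “or”; a quick sketch, using the standard names for these operator families:

```python
# Two common choices of fuzzy "and"/"or", as a sketch.
def and_min(x, y):  return min(x, y)        # minimum (Zadeh/Gödel) t-norm
def and_prod(x, y): return x * y            # product t-norm
def or_max(x, y):   return max(x, y)        # dual of min
def or_prob(x, y):  return x + y - x * y    # dual of product ("probabilistic sum")
def neg(x):         return 1.0 - x

x, y = 0.7, 0.4
print(and_min(x, y), round(and_prod(x, y), 2))   # 0.4 0.28
print(or_max(x, y), round(or_prob(x, y), 2))     # 0.7 0.82

# Both pairs agree with classical "and"/"or" at the crisp endpoints:
for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        assert and_min(a, b) == and_prod(a, b)
        assert or_max(a, b) == or_prob(a, b)
```

Either pair collapses to ordinary Boolean logic when every truth value is 0 or 1, which is what lets fuzzy logic subsume the crisp case.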

1. ^ The term “sentence fragment” is considered politically incorrect nowadays due to protests by incomplete sentences. “Only a fragment? Not us! One of us standing alone? Nothing wrong with that!”

2. ^ I made this word up. I’m so proud of it. Don’t you think it’s cute?

3. ^ Sorry, Eliezer, but this cannot be consistently interpreted such that 0 and 1 are not valid truth values: if you did that, then the modest sentence “this sentence is at least somewhat true” would always be truer than itself, whereas if 1 is a valid truth value, it is a consistent truth value of that sentence.

• The crisp portion of such a self-reference system will be equivalent to a Kripke fixed-point theory of truth, which I like. It won’t be the least fixed point, however, which is the one I prefer; still, that should not interfere with the normal mathematical reasoning process in any way.

In particular, the crisp subset which contains only statements that could safely occur at some level of a Tarski hierarchy will have the truth values we’d want them to have. So, there should be no complaints about the system coming to wrong conclusions, except where problematically self-referential sentences are concerned (sentences which are assigned no truth value in the least fixed point).

So the question is: do the sentences which are assigned no truth value in Kripke’s construction, but are assigned real-numbered truth values in the fuzzy construction, play any useful role? Do they add mathematical power to the system?

For those not familiar with Kripke’s fixed points: basically, they allow us to use self-reference, but to say that any sentence whose truth value depends eventually on its own truth value might be truth-value-less (i.e., meaningless). The least fixed point takes this to be the case whenever possible; other fixed points may assign truth values when it doesn’t cause trouble (for example, allowing “this sentence is true” to have a value).

If discourse about the fuzzy value of (what I would prefer to call) meaningless sentences adds anything, then it is by virtue of allowing structures to be defined which could not be defined otherwise. It seems that adding fuzzy logic will let us define “essentially fuzzy” structures: concepts which are fundamentally ill-defined. But as for the crisp structures that arise, correct me if I’m wrong, it seems fairly clear to me that nothing will be added that couldn’t be added just as well (or better) by adding talk about the class of real-valued functions that we’d be using for the fuzzy truth-functions.

To sum up: reasoning in this way seems to have no bad consequences, but I’m not sure it is useful...

• By the way, how would you incorporate probabilities into binary logic? Either you can include statements about probabilities in binary logic (“probability on top of logic”), or you can assign probabilities to binary logic statements (“logic on top of probability theory”). The situation is just analogous to that of fuzziness. If you do #1, that means binary logic is the most fundamental layer. If you do #2, I can also do an analogous thing with fuzziness.

• The rules of probability reduce to the rules of binary logic when the probabilities are all zero or one, so you get binary logic for free just by using probability.

• Yes, we all know that ;)

But under this approach the binary logic is NOT operating at a fundamental level—it is subsumed by a probability theory. In other words, what is true in the binary logic is not really true; it depends on the probability assigned to the statement, which is external to the logic. In like manner, I can assign fuzzy values to a binary logic which are external to the binary logic.

• It’s good that you pointed out Kripke’s fixed point theory of truth as a solution to the Liar’s paradox. It seems to be an acceptable solution.

On the other hand, I also agree that “fuzziness as a matter of degree” can be added on top of a binary logic. That would be very useful for dealing with commonsense reasoning—perhaps even indispensable.

What is particularly controversial is whether truth should be regarded as a matter of degree, i.e., the development of a fuzzy-valued logic. At this point, I am kinda 50-50 about it. The advantage of doing this is that we can translate commonsense notions easily, and it may be more intuitive to design and implement the AGI. The disadvantage is that we need to deal with a relatively new form of logic (i.e., many-valued logic) and its formal semantics, proof theory, model theory, deduction algorithms, etc. With binary logic we may be on firmer ground.

• YKY,

The problem with Kripke’s solution to the paradoxes, and with any solution really, is that it still contains reference holes. If I strictly adhere to Kripke’s system, then I can’t actually explain to you the idea of meaningless sentences, because it’s always either false or meaningless to claim that a sentence is meaningless. (False when we claim it of a meaningful sentence; meaningless when we claim it of a meaningless one.)

With the fuzzy way out, the reference gap is that we can’t have discontinuous functions. This means we can’t actually talk about the fuzzy value of a statement: any claim “This statement has value X” is a discontinuous claim, with value 1 at X and value 0 everywhere else. Instead, all we can do is get arbitrarily close to saying that, by having continuous functions that are 1 at X and fall off sharply around X… this, I admit, is rather nifty, but it is still a reference gap. Warrigal refers to actual values when describing the logic, but the logic itself is incapable of doing that without running into paradox.

• About the so-called “discontinuous truth values”, I think the culprit is not that the truth value is discontinuous (it doesn’t make sense to say a point-value is continuous or not), but rather that we have a binary predicate, “less-than”, which is a discontinuous truth-functional mapping.

The statement “less-than(tv, 0.5)” seems to be a binary statement. If we make that predicate fuzzy, it becomes “approximately less than 0.5”, which we can visualize as a sigmoidal curve, and this curve intersects the y = x line at 0.5. Thus, the truth value of the fuzzy version of that statement is 0.5, i.e., indeterminate.

All in all, this problem seems to stem from the fact that we’ve introduced the binary predicate “less-than”.

• If I strictly adhere to Kripke’s system, then I can’t actually explain to you the idea of meaningless sentences, because it’s always either false or meaningless to claim that a sentence is meaningless. (False when we claim it of a meaningful sentence; meaningless when we claim it of a meaningless one.)

I’d like to clear this up for myself. You’re saying that under Kripke’s system we build up a tower of meaningful statements with infinitely many floors, starting from “grounded” statements that don’t mention truth values at all. All statements outside the tower we deem meaningless, but statements of the form “statement X is meaningless” can only become grounded as true after we finish the whole tower, so we aren’t supposed to make them.

But this looks weird. If we can logically see that the statement “this statement is true” is meaningless under Kripke’s system, why can’t we run this logic under that system? Or am I confusing levels?

• Call it “expected” truth, analogous to “expected value” in probability and statistics. It’s effectively a way to incorporate a risk analysis into your reasoning.

• Yes, I have worked out a fuzzy logic with probability distributions over fuzzy values.

• I think the original post is not specific enough to be useful.

I see two essential moot points:

1) Why should there be a system of continuous correspondences between the truth values of sentences that has anything to do with some intuitive notion of truth values?

2) Are the truth values (of the sentences) after taking the fixed point actually useful? E.g., can’t it be that we end up with truth values of 1/2 for almost every sentence we can come up with?

Before these points are cleared up, the original post is merely an extremely vague speculation.

A closely related analogue to the second issue: in NP-hard optimization problems with lots of {0,1} variables, it’s a most common problem that after a continuous relaxation the system is easily (polynomially) solvable, but the solution is worthless, as a large fraction of the variables end up at one half, which basically says: “no information”.

• 1) Why should there be a system of continuous correspondences between the truth values of sentences that has anything to do with some intuitive notion of truth values?

I can’t figure out what you’re trying to ask here.

2) Are the truth values (of the sentences) after taking the fixed point actually useful? E.g., can’t it be that we end up with truth values of 1/2 for almost every sentence we can come up with?

I suppose the best answer I can give to this is “maybe”. If the logic operations that you use are 1-x, min(x,y), and max(x,y), and the sentences are entirely baseless (i.e. no sentence can be calculated independently of all the others), then a truth value of 1/2 for everything will always be consistent. If your sentences happen to actually form a hierarchy where sentences can only talk about sentences lower down, fuzzy logic will give a good answer.
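That observation is easy to check mechanically; the three-sentence system below is a hypothetical example of an entirely baseless set:

```python
# With neg(x) = 1 - x, min for "and", max for "or", each operation maps
# 0.5-everywhere back to 0.5, so the all-0.5 assignment is a fixed point
# of any update built from these three operations.
assert 1 - 0.5 == 0.5
assert min(0.5, 0.5) == 0.5
assert max(0.5, 0.5) == 0.5

# Hypothetical ungrounded system: A = not B;  B = A or C;  C = A and B.
def update(tv):
    a, b, c = tv
    return (1 - b, max(a, c), min(a, b))

# Consistent, but carries no information about any of the sentences:
assert update((0.5, 0.5, 0.5)) == (0.5, 0.5, 0.5)
```

This is exactly the "no information" failure mode of the relaxation analogy above: consistency alone does not make the assignment useful.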

The NP-hard optimization thing you cite is interesting; do you have a link?

Finally, in my defense, the purpose of this post was mainly to advocate for the use of fuzzy logic through the insight that it resolves paradoxes in a manner much more elegant than ordinal type hierarchy thingies, mentioning that fuzzy logic seems to be the only good way to deal with subjective things such as tallness and beauty anyway.

• To 1):

I suspected, but was not sure, that you meant the standard min/max relaxation of logical operators. You could have had more elaborate plans (I could not rule that out) that could have led to unexpectedly interesting consequences, but this is highly speculative. An analogue again from combinatorial optimization: moving away from linear (essentially min/max based) relaxations to semidefinite ones can non-trivially improve the performance of coloring and SAT-solving algorithms, at least asymptotically.

“The NP-hard optimization thing you cite is interesting; do you have a link?”

This is well-known practical folklore in that area, not explicitly the topic of publications, but rather part of the introductory training. If you want to have a closer look, search for randomized rounding, which is a well-established technique and can yield good results for certain problem classes, but may flop for others, exactly due to the above-mentioned dominance of fractional solutions (integer/decision variables taking half-integer values being the typical case). E.g., undergraduate course materials on the traveling salesman problem have concrete examples of this issue occurring in practice.

• I want to be able to write stuff like “All complete sentences written in English contain at least one vowel” without having to write it in Spanish or as an incomplete sentence.

Nitpick: Since this sentence doesn’t refer to the truth value of any English sentence, you’d still be able to write it even if you were using type hierarchies or the like. I think.

• I might be missing something, but it seems as if you’re needlessly complicating the situation.

First of all, I’m not convinced that sentences ought to be able to self-reference. The example you give, “All complete sentences written in English contain at least one vowel”, isn’t necessarily self-referencing. It’s stating a rule which is inevitably true, and which it happens to conform to. I could equally well say “All good sentences must at least one verb.” This is not a good sentence, but it does communicate a grammatical rule.

But none of this has a priori truth—they just happen to conform to accepted standards—and I don’t think they demonstrate the usefulness of self-referencing. English grammar allows you to self-reference, but defining “Cat (n): a cat” is a tautology. English also allows you to ask the question “What happened before time began?” and while that is a perfectly valid sentence, it’s a meaningless question.

As a corollary, mathematical notation allows me to write “2+2=5” (note—the person who writes this down isn’t claiming that 2+2=5; she is far better versed than Aurini in the reasons it equals 4; she is just demonstrating that she can write down nonsense). This doesn’t require a defense of arithmetic; it’s simple enough to point out that the equation is nonsense.

“This sentence is false.” “What happened before time?” “My pet elephant that I named George doesn’t exist.” I don’t see that a rebuttal is necessary, meaningful, or even possible in these situations. It’s enough to say “That’s stupid,” and move on to something interesting.

• Warning, nitpicks follow:

The sentence “All good sentences must at least one verb.” has at least one verb. (It’s an auxiliary verb, but it’s still a verb. Obviously this doesn’t make it good; but it does detract from the point somewhat.)

“2+2=5” is false, but it’s not nonsense.

• On the topic of fuzzy logic: Is there a semantics for fuzzy logic in which the fuzzy truth value of the statement “predicate P is true of object x” is the expected value of P(x) after marginalizing out a prior belief distribution over possible hidden crisp definitions of P?

• Yes, because probabilistic logic is a special case of fuzzy logic. (The phrase “special case” is odd, because you could argue that it’s simply the correct case, and all others are wrong.)

• I guess not. The point is that “matters of degree” are inherently different from probabilities, and the former cannot be reduced to the latter. To best clarify this point, we need a formal semantics of fuzzy logic (where fuzziness is treated as a matter of degree). I’m not sure if there’s such research in the literature; I’ll have a look when I have time...

• I’m not sure they are inherently different. I read Kosko’s popular book on Fuzzy Logic many years ago and can’t remember the details of the argument, but he claimed that probabilistic logic is a special case of fuzzy logic, as propositional logic is a special case of probabilistic logic (i.e., with probabilities of 0 and 1).

• Several things. First, you’re claiming “probabilistic is a special case of fuzzy”, but that does not imply “fuzzy is a special case of probabilistic”, which was the original point of contention.

Secondly, you have probably confused fuzzy logic with “possibility theory”. There can be many types of fuzzy logic, and the issue we’re currently debating is whether “truth” can be regarded as a matter of degree, i.e., fuzziness as degree of truth. Possibility theory is a special type of fuzzy theory which results from giving up probability axiom #3, “finite additivity”. That is probably what your author is referring to.

• It occurs to me as a “Notion” that . . .

To formulate fuzzy logic in a boolean top-domain environment, I think you would need to use a probabilistic wave-form type explanation, or else just treat fuzzy logic as a conditional multiplier on any boolean truth value. To encapsulate a boolean or strict logic system into fuzzy logic is trivial and evolving. You could start with just adding a percentage, based on some complex criteria, to any logical tautology or contradiction. By default, the truth axis of a fuzzy logic decision or logic tree is going to be known for some classes of logic systems. When used for making real-world decisions, in the context of taking action in a decision or vote, the “relevance” value of a fuzzy-logic-based decision branch would be 0% relevant for a “contradiction logic” and 100% relevant for a “tautology logic”? So in the real world, we humans don’t consider contradictions when we use fuzzy logic to decide on a course of action. The default truth and relevance value of any contradiction in fuzzy logic is zero until voted otherwise or adjusted through some mechanism.

Or maybe this doesn’t make sense? Sorry if this post is a little confused; I haven’t thought about these ideas until just now for this discussion. Thanks for your time, and let me know if it wasn’t worth your time or if it bothered you. I don’t want to be a bother, so just ask and I’ll go away if you prefer. Thx.

• If all true statements are defined as non-contradictory, then you can ask more meaningful fuzzy logic questions about the relevance of several tautologies for applying to a specific real-world phenomenon. To do this you need a survey or poll of the environment, and a survey or poll for determining how much the tautologies matter. For example:

Consider the following boolean true/false claims we hold to be true, and consider their relevance for use in locating humans statistically. Our first rule, or fuzzy logic heuristic, is to take the first tautology that seems relevant and apply it to see if it matches results.

For example, consider these specific logic systems:

1) The complete theory of gravity as discovered by Newton determines that, statistically, humans have mass and density approximately equal to water. Gravity combined with density predicts that humans should be located in a region of space centered on the gravitational center of the planet and evenly distributed in a sphere, with all air above every human and all solid matter below humans. Evidence of humans living underground, or flying on airplanes above any air, or with air separating humans from the center of gravity, is a violation of this theory of gravity and density and random distribution mathematics.

2) The incomplete theory of plate tectonics and geography determines that in some places there will be air closer to the center of gravity than at other places. The idealized sphere distribution of humans has bumps and valleys caused by plate tectonics, which asserts that some humans on mountaintops will be at local equilibrium above other air molecules in valleys. Humans at one latitude and longitude can be above the air at some other latitude and longitude.

3) The incomplete theory of human behavior says that humans can move and defy uniform distribution rules about their statistical probabilistic location relative to the center of gravity. People go into rocket ships and can even be found above the air, which is in complete contradiction to the theory of gravity considered as the only tautology theory of relevance.

4) The theory of geometry and angular momentum combined with gravity proves conclusively that humans must be located exclusively in a squished-sphere-shaped (oblate) distribution, with their distance from the center of gravity determined solely by their relative density compared to the rest of the material in the planetary space under consideration.

Conclusion: not all of these verifiably true boolean statements are equally valuable and equally relevant. Some of them can be discarded, or are more usably incomplete than others. The utility value of any logic system is determined by the use case, and boolean logic components can be added to and removed from consideration, and from time-consuming calculations, based on their predictive ability for the particular use case. To determine, in this example, the most relevant and important logic systems, I would like anyone who reads this to rank-order the logic system choices from most relevant and useful to least relevant and useful. The distribution of your rank-order voting will determine the utility value of the multiple non-contradictory tautology logic systems comparatively. You may also add one (1) new option to this voting poll on relevance, and others can rank your additional logic framework. When we have enough votes, we start evolving and deleting logic systems from our poll until we have a high level of agreement or a stable equilibrium of the voting distribution.

Boolean logic tells us what is possible. Fuzzy logic tells us what is relevant and usable.

• There should be a way to break this system. Let’s see...

“This sentence doesn’t have a consistent truth value.”

Did I win?

• No. The truth value of this sentence is 0.

• In your attempt to make a sentence behave a certain way, you made a sentence simply describing its behavior instead of one that actually behaves in the required manner. I nearly did that while writing this: after I wrote “this sentence is a little bit truer than itself”, it took me what seemed way too long to come up with “this sentence is at least somewhat true”.

• So it’s false. Which makes it self-contradictory, and therefore false.

I think.

• This won’t happen under strict interpretations of sentences such as “this sentence’s truth value is less than 0.5”: this sentence, interpreted as black and white, has a truth value of 1 when its truth value is below 0.5 and a truth value of 0 when it’s not. This is inconsistent. So, we’ll … ban sentences whose truth values have “jumps”, or discontinuities.

This makes it sound like you have indeed just reintroduced types under another name, patching “this statement is false” by forbidding “this statement has truth value 0.0”.

• I think “this statement has truth value 0” is allowed.

ADDED: It has no discontinuity. It is the iterated system f(S) = 1 - S, and its fixed point is at S = .5.

• But it manifestly has a discontinuity. Would you prefer the equivalent “this statement’s truth value is less than epsilon (epsilon some infinitesimal)”?

• Why does it have a discontinuity?

Folks, you shouldn’t vote down legitimate questions.

• Relevantly, because it’s structurally identical to Warrigal’s sample sentence, so whatever definition Warrigal is using (a perfectly standard one, it seems to me) must apply to both.

• It’s structurally identical to a sample sentence that Warrigal used in describing a different approach, not the one he/she is taking.

If it manifestly has a discontinuity, you should be able to say where it is.

(In fact, it does not have a discontinuity. For not(S) = 1 - S, it is completely linear: it is the iterated system f(S) = 1 - S, having a fixed point at .5.)

• Okay, this is getting silly. Warrigal says “The sentence ‘this sentence’s truth value is less than 0.5’ has a sharp jump in truth value at 0.5, but the sentence ‘this sentence’s truth value is significantly less than 0.5’ does not [and we will ban the first form]”. In the same way, my sentence “This sentence’s truth value is less than epsilon” has a discontinuity at epsilon. Both sentences make discontinuous claims about their truth values.

What is the “different approach” that you claim this sentence is in reference to?

(Incidentally, I agree with you that my sentence has a fixed point at 0.5 under Warrigal’s system. That’s why my original comment was criticizing the presentation and not necessarily the content of the theory.)

• The sentence we were discussing was “This statement has truth value 0”. I assumed that when you said it was structurally identical to Warrigal’s sample sentence, you were referring to this passage:

“This sentence is false” is not a valid sentence there, because it refers to itself, but no ordinal number is less than itself.

That sentence refers to the traditional ways around Russell’s paradox.

You seem to say discontinuity when you mean a discontinuous first derivative.

• A conjecture (seems easy to prove):

“If, in a fuzzy logic where truth values range over [0,1], we allow logical operators or predicates (which are maps from [0,1] to [0,1]) whose graphs do not intersect the y = x line, then we can always construct a Liar’s Paradox.”

An example is the binary predicate “less-than” with its second argument fixed at 0.5, which has a discontinuity at 0.5 and hence does not intersect the y = x line.

• I want to be able to write stuff like “All complete sentences written in English contain at least one vowel”

Why?


• Second, isn’t this just probability, which we already know and love? No, it isn’t. If I say that “the Leaning Tower of Pisa is extremely short”, I don’t mean that I’m very, very sure that it’s short. If I say “my mother was half Irish”, I don’t mean that I have no idea whether she was Irish or not, and might find evidence later on that she was completely Irish. Truth values are separate from probabilities.

Fuzzy logic is just sloppy probability, although Lotfi Zadeh doesn’t realize it. (I heard him give a talk on it at NIH, and my summary of his talk is: He invented fuzzy logic because he didn’t understand how to use probabilities. He actually said: “What if you ask 10 people if Bill is tall, and 4 of them say yes, but 6 of them say no? Probabilities have no way of representing this.”)

You can select your “fuzzy logic” functions (the set of functions used to specify a fuzzy logic, which say what value to assign A and B, A or B, and not A, as a function of the values of A and B) to be consistent with probability theory, and then you’ll always get the same answer as probability theory.

The rules for standard probability theory are correct. But “sloppy” fuzzy-logic probability functions, like “A or B = max(A,B); A and B = min(A,B); not(A) = 1-A”, have advantages when Bayesian logic gives lousy results. Here are 2 situations where fuzzy logic outperforms use of Bayes’ law:

1. You have incomplete or inaccurate information. Say you are told that A and B have a correlation of 1: P(A|B) = P(B|A) = 1. By Bayes’ law, P(A^B) = P(AvB) = P(A) = P(B). Then you’re told that P(A) and P(B) are different. You’re then asked to compute P(A^B). Bayes’ law fails you, because the facts you’ve been given are inconsistent. Fuzzy logic is a heuristic that lets you plow through the inconsistency: it enforces p(AvB) >= p(A^B), when Bayes’ law just blows up.

2. You are a robot, making a plan. For every action you take, you have a probability of success that you always associate with that action. You assume that the probability of success for each step in a plan is independent of the other steps. But in reality, sometimes they are highly correlated. Because you assume probabilities are independent, you strongly favor short plans over long plans. Using fuzzy logic allows you to construct longer plans.

Fuzzy logic is just a pragmatic computational tool. Nothing that’s going to help you get around a paradox, except in the sense that it will let you construct a model that’s inaccurate enough that the paradox disappears from sight.
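The “sloppy” connectives quoted above (max for “or”, min for “and”, 1-A for “not”; the common Zadeh operators) can be sketched together with the inconsistent-inputs situation from point 1. The numbers 0.7 and 0.4 are made up for illustration:

```python
# The "sloppy" fuzzy connectives: max for or, min for and, 1-A for not.
f_and = min
f_or = max
f_not = lambda a: 1.0 - a

# Situation 1: we're told A and B are perfectly correlated, yet given
# different values for p(A) and p(B). The inputs are inconsistent, so
# Bayes' law yields no consistent answer; the fuzzy rules still return
# values, and "or" stays at least as true as "and" by construction.
pA, pB = 0.7, 0.4
print(f_and(pA, pB))  # 0.4
print(f_or(pA, pB))   # 0.7
assert f_and(pA, pB) <= f_or(pA, pB)  # min <= max always holds
```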

• When you switch to using these numbers to differentiate between “short” and “extremely short”, that’s not probability. But then you’re no longer talking about truth values. You’re just measuring things. The number 17 is no more true than the number 3.

All that said, the approach you just described is interesting. I’m missing something, but it’s very late, so I’ll have to try to figure it out tomorrow.

• You can select your “fuzzy logic” functions (the set of functions used to specify a fuzzy logic, which say what value to assign A and B, A or B, and not A, as a function of the values of A and B) to be consistent with probability theory, and then you’ll always get the same answer as probability theory.

How do you do this? As far as I understand, it is impossible, since probability is not truth functional. For example, suppose A and B both have probability 0.5 and are independent. In this case, the probability of ‘A^B’ is 0.25, while the probability of ‘A^A’ is 0.5. You can’t do this in a (truth-functional) logic, as it has to produce the same value for both of these expressions if A and B have the same truth value. This is why minimum and maximum are used.
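The A^A versus A^B point can be made concrete. A sketch, with min standing in for any truth-functional conjunction:

```python
# Independent A and B, each with probability 0.5.
pA, pB = 0.5, 0.5
prob_A_and_B = pA * pB  # probability theory, using independence: 0.25
prob_A_and_A = pA       # "A and A" is just A: 0.5

# A truth-functional connective sees only the two input values, so when
# p(A) == p(B) it is forced to give A^B and A^A the same value:
fuzzy_and = min
assert fuzzy_and(pA, pB) == fuzzy_and(pA, pA)  # both 0.5
assert prob_A_and_B != prob_A_and_A            # 0.25 vs 0.5
```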

• Calling fuzzy logic “truth functional” sounds like you’re changing the semantics; but nobody really changes the semantics when they use these systems. Fuzzy logic use often becomes a semantic muddle, with people making the values simultaneously mean truth, probability, and measurement, interpreting them in an ad-hoc manner.

You can tell your truth-functional logic that A^A = A. Or, you can tell it that P(A|A) = 1, so that p(A^A) = p(A).

• Calling fuzzy logic “truth functional” sounds like you’re changing the semantics;

‘Truth functional’ means that the truth value of a sentence is a function of the truth values of the propositional variables within that sentence. Fuzzy logic works this way. Probability theory does not. It is not just that one is talking about degrees of truth and the other is talking about probabilities. The analogue to truth values in probability theory are probabilities, and the probability of a sentence is not a function of the probabilities of the variables that make up that sentence (as I pointed out in the preceding comment, when A and B have the same probability, but A^A has a different probability to A^B). Thus propositional fuzzy logic is inherently different to probability theory.

You might be able to create a version of ‘fuzzy logic’ in which it is non truth-functional, but then it wouldn’t really be fuzzy logic anymore. This would be like saying that there are versions of ‘mammal’ where fish are mammals, but we have to understand ‘mammal’ to mean what we normally mean by ‘animal’. Sure, you could reinterpret the terms in this way, but the people who created the terms don’t use them that way, and it just seems to be a distraction.

At least that is as far as I understand. I am not an expert on non-classical logic, but I’m pretty sure that fuzzy logic is always understood so as to be truth-functional.

• You might be able to create a version of ‘fuzzy logic’ in which it is non truth-functional, but then it wouldn’t really be fuzzy logic anymore.

Eep, maybe I should edit my post so it doesn’t say “fuzzy logic”. Not that I know that non-truth-functional fuzzy logic is a good idea; I simply don’t know that it isn’t.

• He actually said: “What if you ask 10 people if Bill is tall, and 4 of them say yes, but 6 of them say no? Probabilities have no way of representing this.”)

You may have misunderstood what Zadeh was saying. Suppose Bill is 5 feet, 9 inches in height and all ten people know this. I.e. we are not attempting to represent the likelihood that Bill is or is not tall based on the uncertain evidence given by different people. It is not 60% likely that Bill is tall, and 40% likely that he is not. He is 5 feet, 9 inches and everyone knows it. No one disagrees on his actual, measured height.

Now we could taboo the word “tall”, and we wouldn’t lose any information; and in some contexts that might be the right thing to do. However, in practical, day-to-day life humans do use words like “tall” that have fuzzy, non-crisp boundaries. The truth value of a word like “tall” is better expressed as a real number than a boolean value. Fuzzy logic represents the apparent disagreement on whether or not Bill is tall by saying he is 60% tall and 40% not tall.
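One way to formalize the 60%-tall reading is a membership function. A sketch: the breakpoints 63 and 73 inches are made up here, chosen so that Bill at 5 feet, 9 inches comes out 60% tall.

```python
def tall(height_inches):
    """Degree of membership in "tall", rising linearly between two
    made-up breakpoints: fully not-tall at 63 in, fully tall at 73 in."""
    lo, hi = 63.0, 73.0
    if height_inches <= lo:
        return 0.0
    if height_inches >= hi:
        return 1.0
    return (height_inches - lo) / (hi - lo)

print(tall(69))  # Bill at 5'9": 0.6, i.e. 60% tall and 40% not tall
```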

• That isn’t what distinguishes fuzzy logic from probabilities. Both would represent this case with the number 0.6. The distinguishing feature of fuzzy logic is that it uses non-probabilistic functions to compute joint probabilities, to avoid various practical and computational problems.

• I think I’ve figured it out.

You have a set of equations for p(X1), p(X2), etc., where

p(X1) = f1(p(X2), p(X3), … p(Xn))

p(X2) = f2(p(X1), p(X3), … p(Xn))

...

Warrigal is saying: This is a system of n equations in n unknowns. Solve it.

But this has nothing to do with whether you’re using fuzzy logic!

If you define the functions f1, f2, … so that each corresponds to something like

f1(p(X2), p(X3), …) = p(X2 and (X3 or X4) … )

using standard probability theory, then you’re not using fuzzy logic. If you define them some other way, you’re using fuzzy logic. The approach described lets us find a consistent assignment of probabilities (or truth-values, if you prefer) either way.
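A minimal sketch of “solve it”: the pair of sentences, the damped iteration, and the starting value below are my illustrative choices; averaging old and new values is a standard trick that helps convergence here but is not guaranteed to work for every system.

```python
def solve(fs, n, iters=200):
    """Damped fixed-point iteration for the system vals[i] = fs[i](vals):
    repeatedly average each value with its recomputed value."""
    vals = [0.3] * n  # arbitrary starting point
    for _ in range(iters):
        vals = [(v + f(vals)) / 2.0 for v, f in zip(vals, fs)]
    return [round(v, 6) for v in vals]

# X1 = "X2 is false", X2 = "X1 is true":
f1 = lambda v: 1.0 - v[1]
f2 = lambda v: v[0]
print(solve([f1, f2], 2))  # [0.5, 0.5], the consistent assignment
```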

• Is this really the case?

In fuzzy logic, one requires that the real-numbered truth value of a sentence is a function of its constituents. This allows the “solve it” reply.

If we swap that for probability theory, we don’t have that anymore… instead, we’ve got the constraints imposed by probability theory. The real-numbered value of “A & B” is no longer a definite function F(val(A), val(B)).

Maybe this is only a trivial complication… but, I am not sure yet.

• When you switch to using these numbers to differentiate between “short” and “extremely short”, that’s not probability. But then you’re no longer talking about truth values.

That is in fact precisely what I mean by “truth value”. What does “truth value” mean in your book?

• Then what does it mean in fuzzy logic to say “The truth value of ‘Bill is 3 feet tall’ is .5”?

• It means that Bill is pretty nearly 3 feet tall, but not exactly. Perhaps it means that for half of all practical purposes, Bill is 3 feet tall; that may be a good formalization.

I’ll mention now that I don’t know if normal treatments of fuzzy logic insist that truth value functions be continuous. Mine does, which may make it ordinary, substandard, quirky, or insightful.

• I don’t understand this statement: “p(AvB) >= p(A^B), when Bayes’ law just blows up”.

p(AvB) >= p(A^B) should always be true, shouldn’t it?
I know A^B → AvB is a tautology (p=1) and that the truth value of AvB → A^B depends on the values of A and B; translated into probabilities, these show that p(AvB) >= p(A^B) is always true.

• If you’re told p(A|B) = 1, but are given different values for p(A) and p(B), you can’t apply Bayes’ law. Something you’ve been told is wrong, but you don’t know what.

Note that the fuzzy logic rules given are a compromise between A and B having correlation 1, and being independent.

• O...kay. It looks like you just decided to post the first thing in your head without concern for saying anything useful.

You come up with fractional values for truth, but don’t think it’s necessary to say what a fractional truth value means, let alone formalize it.

You propose the neato idea to use fractional truth values to deal with statements like “this is tall”, and boost it with a way to adjust such truth values as height varies. Somehow you missed that we already have a way to handle such gradations; it’s called “units of measurement”. We don’t need to say, “It’s 0.1 true that a football field is long”; we just say, “it’s true that a football field is 100 yards long.”

Anyway, I thought I’d use this opportunity to say something useful. I was just reading Gary Drescher’s Good and Real (discussed here before), where he gives the most far-reaching, bold response to the claim that Goedel’s theorem proves limitations to machines, and I’m surprised the argument doesn’t show up more often, and that he didn’t seem to have anyone to cite as having made it before.

It goes like this: people claim that formal systems are somehow limited in that they can’t “see” that Goedel statements of the form “This statement can’t be proven within the system” are true. Drescher attacks this at the root and says, that’s not a limitation, because the statement’s not true.

He explains that you can’t actually rule out falsehood of the Goedel statement, as many people immediately do, because its falsity still leaves room for the possibility that “This statement has a proof, but it’s infinitely long.” But then the subtle assumption that “This statement has a proof” implies “This statement is true” becomes much more tenuous. It’s far from obvious why you must accept as true a statement whose proof you can never complete.

Take that, Penrose!

• Silas, a suggestion which you can take or leave, as you prefer.

This comment makes some sound points, but IMHO, in an unnecessarily personal way. Note the consistent use of the critical “you”-based formulations (“you just decided”, “you come up with”, “you propose”, “you missed that”). Contrast this with Christian’s comment, which is also critical, but consistently focuses on the ideas, rather than the person presenting them.

I have no idea why you feel the need to throw about thinly-veiled accusations that Warrigal is basically an idiot. (How else could he or she possibly have missed all these really obvious problems you so insightfully spotted?) Maybe you don’t even intend them as such (though I’m baffled as to how you could possibly miss the overtones of your statements when they’re so freakin’ OBVIOUS). But the tendency to belittle others’ intellectual capacities (rather than just their views) is one that you’ve exhibited on a number of prior occasions as well, and one that I think you would do well to try to overcome, if only so that others will be more receptive to your ideas.

PS. For the avoidance of doubt, that final para was intended in part as an ironic illustration of the problem. I’m not that un-self-aware.

PPS. Also, I didn’t vote you down.

• I agree that I’ve been many times unnecessarily harsh. But seriously, take a look at a random sampling of my posts and see how many of them are that way. It’s not actually as often as you’re trying to imply.

I do it because some people cross the threshold from “honest mistake” into “not even trying”. In which case they need to know that too, not just the specifics of their error. Holding someone’s hand through basic explanations is unfair to the people who have to do the work that the initial poster should have done for themselves.

And FWIW, if anyone ever catches me in that position, where I screw up so badly that I didn’t even appear to be thinking when I posted, I hope that you treat me the same way, so that I learn not just my specific error, but why it was so easily avoidable. Arguably, that’s the approach you just took.

Now a suggestion for you: your comment was best communicated by private message. Why stage a degrading, self-congratulatory “intervention”? Unless...

• Holding someone’s hand through basic explanations is unfair to the people who have to do the work that the initial poster should have done for themselves.

What’s obvious to one person is seldom obvious to everybody else. There are things that seem utterly trivial to me that lots of people don’t get immediately, and many more things that seem utterly trivial to others that I don’t get immediately. That doesn’t mean that any of us aren’t trying, or deserve to be belittled for “not getting it”. (I can’t quite tell if your second paragraph is intended as justification or merely explanation; apologies if I’ve guessed wrongly.)

Why stage a degrading, self-congratulatory “intervention”?

It wasn’t intended to be self-congratulatory; it was intended to make a point. Oh well. As for being degrading, I was attempting, via irony, to help you to understand the impact of a particular style of comment. It’s a style that I would normally try to avoid, and I agree that in general such comments might be better communicated privately, and certainly in a less inflammatory way. (In this case, it honestly didn’t occur to me to send a private message. Not sure what I would have done if it had. I think the extent to which others here agree or disagree with my point is useful information for us both, but information that would be lost if the correspondence were private.)

It’s not actually as often as you’re trying to imply.

I’m not sure what you think I was trying to imply, but I had two specific instances in mind (other than this one), and honestly wasn’t trying to imply anything beyond that.

• What’s obvious to one person is seldom obvious to everybody else.

You’re preaching to the choir here. But when Warrigal announces some grand new idea, but just shrugs off even the importance of spelling out its implications, that’s well beyond “not noticing something that’s obvious to others” and into the territory of “not giving a s---, but expecting people to do your work for you.”

As for being degrading, I was attempting, via irony, to help you to understand the impact of a particular style of comment.

Right. I “got” that the first time around (even before the PS), thanks. That wasn’t what I was referring to as “degrading”; it was actually pretty clever. Good work!

The degrading bit was where you do the internet equivalent of calling someone out in public, and then going through your accumulated list of their flaws, so anyone else who doesn’t like the resident “bad guy” (the guy who actually says what everyone else isn’t willing to take the karma hit for) can join the pile-on.

In this case, it honestly didn’t occur to me to send a private message.

Sure, because what you were trying to accomplish (self-promotion, “us vs. them”) wouldn’t have been satisfied by a private message, so of course it’s not going to occur to you.

Other people seem to manage to PM me when I’m out of line (won’t name names here). But that’s generally because they’re actually interested in improving my posting, not in grandstanding.

• I see no “accumulated list of [your] flaws” in what conchis has posted here. I see some comments on what you said on this particular occasion; and I see, embedded in something that (as you say you understood, and I’m sure you did) was deliberately nasty in style in order to make a point, the claim that you’ve exhibited the same pathology elsewhere as is on display here. No accumulated list; a single flaw, and even that mentioned only to point up the distinction between criticizing what someone has written and criticizing them personally.

Also: You’re being needlessly obnoxious; please desist. I am saying this in public rather than by PM because what I am trying to accomplish is (some small amount of) disincentive for other people who might wish to be obnoxious themselves. I am interested in improving not only your posting but LW as a whole.

And, FWIW, so far as I can tell I have no recollection of your past behaviour on LW, and in particular I am not saying this because I “don’t like” you.

• I’m willing to apologise for publicly calling you out. While I’m still not totally convinced that PMing would have been optimal in this instance, it was a failing on my part not to have considered it at all, and I’m certainly sorry for any hurt I may have caused.

I’m also sorry that you seem to have such a poor impression of me that you can’t think of any way to explain my behaviour other than self-promotion and grandstanding. Not really big on argumentative charity, are you?

• Apology accepted! :-)

I apologize for loading up on the negative motives I attributed to you. I appreciate your feedback; I would just prefer it not be done in a way that makes a spectacle of it all.

• Apology likewise accepted! ;)

• He cites “Goedel, Escher, Bach”, in which Hofstadter makes the same argument. Hofstadter doesn’t apply it to the silly why-we-aren’t-machines argument, though. (And Drescher doesn’t actually say that a Goedel sentence isn’t true, just that we can’t really know it’s true.)

• An infinitely long proof is not a proof, since proofs are finite by definition.

The truth value of a statement does not depend on the existence of a proof anyway; the definition of truth is that it holds in every model. It is just a corollary of Goedel’s completeness theorem that syntactic truth (existence of a (finite) proof) coincides with semantic truth if the axiom system satisfies certain assumptions.

• With that definition of truth, a Goedel sentence is not “true”, because there are models in which it fails to hold; neither is its negation “true”, because there are models in which it does. But that’s not the only way in which the word “true” is used about mathematical statements (though perhaps it should be); many people are quite sure that (e.g.) a Goedel sentence for their favourite formalization of arithmetic is either true or false (and by the latter they mean not-true). There’s plenty of reason to be skeptical about the sort of Platonism that would guarantee that every statement in the language of (say) Principia Mathematica or ZF is “really” true or false, but it hardly seems reasonable to declare it wrong by definition as you’re doing here.

• many people are quite sure that (e.g.) a Goedel sentence for their favourite formalization of arithmetic is either true or false (and by the latter they mean not-true).

Those people seem a bit silly, then. If you say “The Godel sentence (G) is true of the smallest model (i.e. the standard model) of first-order Peano Arithmetic (PA)” then this truth follows from G being unprovable: if there were a proof of G in the smallest model, there would be a proof of G in all models, and if there were a proof of G in all models, then by Godel’s completeness theorem G would be provable in PA. To insist that the Godel sentence is true in PA—that it is true wherever the axioms of PA are true—rather than being only “true in the smallest model of PA”—is just factually wrong, flat wrong as math.

Also, you’re assuming the consistency of PA.

• The people I’m thinking of—I was one of them, once—would not say either “G is true in PA” or “G is true in such-and-such a model of PA”. They would say, simply, “G is true”, and by that they would mean that what G says about the natural numbers is true about the natural numbers—you know, the actual, real, natural numbers. And they would react with some impatience to the idea that “the actual, real, natural numbers” might not be a clearly defined notion, or that statements about them might not have a well-defined truth value in the real world.

In other words, Platonists.

• I think most people who know Goedel’s theorem say “G is true” and are “unreflective platonists,” by which I mean that they act like the natural numbers really exist, etc., but if you pushed them on it, they’d admit the doubt of your last couple of sentences.

Similarly, most people (e.g., everyone on this thread) state Goedel’s completeness theorem platonically: a statement is provable if it is true in every model. That doesn’t make sense without models having some platonic existence. (Yes, you can talk about internal models, but people don’t.) I suppose you could take the platonic position that all models exist without believing that it is possible to single out the special model. (Eliezer referred to “the minimal model”; does that work?)

• You are right: you may come up with another consistent way of defining truth.

However, my comment was a reaction to Silas’s comment, in which he seemed to confuse the notions of syntactic and semantic truth, taking provability as the primary criterion. I just pointed out that even undergraduate logic courses treat semantic truth as the basis, with syntactic truth entering the picture as a consequence.

• You propose the neato idea to use fractional truth values to deal with statements like “this is tall”, and boost it with a way to adjust such truth values as height varies. Somehow you missed that we already have a way to handle such gradations; it’s called “units of measurement”.

Units of measurement don’t work nearly as well when dealing with things such as beauty instead of length.

• Then neither does fuzzy logic.

• I think an important distinction between units of measurement and fuzzy logic is that units of measurement must pertain to things that are measurable, and they must be objectively defined, so that if two people express the same thing using units of measurement, their measurements will be the same. I see no reason that fuzzy logic shouldn’t be applicable to things that are simply a person’s impression of something.

Or perhaps it would be perfectly reasonable to relax the requirement that units of measurement be as objective as they are in practice. If Helen of Troy was N standard deviations above the norm in beauty (trivia: N is about 6), we can declare the helen equal to N standard deviations of beauty, and then agents capable of having an impression of beauty could look at random samples of people and say how beautiful they are in millihelens.

If there’s a better way of representing subjective trueness than real numbers between 0 and 1, I imagine lots of people would be interested in hearing it.

• Or perhaps it would be perfectly reasonable to relax the requirement that units of measurement be as objective as they are in practice. If Helen of Troy was N standard deviations above the norm in beauty (trivia: N is about 6), we can declare the helen equal to N standard deviations of beauty, and then agents capable of having an impression of beauty could look at random samples of people and say how beautiful they are in millihelens.

That’s still creating a unit of measurement; it just uses protocols that prime it with respect to one person rather than a physical object. It doesn’t require a concept of fractional truth, just regular old measurement, probability, and interpolation.

Why don’t you spend some time more precisely developing the formalism… oh, wait

how can this be treated formally? I say, to heck with it.

That’s why.

• I don’t think it’s fair to demand a full explanation of a topic that’s been around for over two decades (though a link to an online treatment would have been nice). Warrigal didn’t ‘come up with’ fractional values for truth. It’s a concept that’s been around (central?) in Eastern philosophy for centuries if not millennia, but was more-or-less exiled from Western philosophy by Aristotle’s Law of the Excluded Middle.

Fuzzy logic has proven itself very useful in control systems and in AI, because it matches the way people think about the world. Take Hemingway’s challenge to “write one true [factual] sentence” (for which you would then need to show 100% exact correspondence of words to molecules in all relevant situations) and one’s perspective can change to see all facts as only partially true, i.e., with a truth value in [0,1].

The statement “snow is white” is true if and only if snow is white, but you still have to define “snow” and “white”. How far from 100% even reflection of the entire visible spectrum can you go before “white” becomes “off-white”? How much can snow melt before it becomes “slush”? How much dissolved salt can it contain before it’s no longer “snow”? Is it still “snow” if it contains purple food colouring?

The same analysis of most concepts reveals we inherently think in fuzzy terms. (This is why court cases take so damn long to pick between the binary values of “guilty” and “not guilty”, when the answer is almost always “partially guilty”.) In fuzzy systems, concepts like “adult” (age of consent), “alive” (cryonics), and “person” (abortion) all become scalar variables defined over n dimensions (usually n=1) when they are fed into the equations, and the results are translated back into a single value post-computation. The more usual control-system variables are things like “hot”, “closed”, “wet”, “bright”, “fast”, etc., which make the system easier to understand and program than continuous measurements.

Bart Kosko’s book on the topic is Fuzzy Thinking. He makes some big claims about probability, but he says it boils down to fuzzy logic being just a different way of thinking about the same underlying math. (I don’t know if this gels with the discussion of ‘truth functionalism’ above.) However, this prompts patterns of thought that would not otherwise make sense, which can lead to novel and useful results.

• I voted up your post for its conclusions, but would request that you make them a bit friendlier in the future...