# How An Algorithm Feels From Inside

“If a tree falls in the forest, and no one hears it, does it make a sound?” I remember seeing an actual argument get started on this subject—a fully naive argument that went nowhere near Berkeleyan subjectivism. Just:

“It makes a sound, just like any other falling tree!”
“But how can there be a sound that no one hears?”

The standard rationalist view would be that the first person is speaking as if “sound” means acoustic vibrations in the air; the second person is speaking as if “sound” means an auditory experience in a brain. If you ask “Are there acoustic vibrations?” or “Are there auditory experiences?”, the answer is at once obvious. And so the argument is really about the definition of the word “sound”.

I think the standard analysis is essentially correct. So let’s accept that as a premise, and ask: Why do people get into such an argument? What’s the underlying psychology?

A key idea of the heuristics and biases program is that mistakes are often more revealing of cognition than correct answers. Getting into a heated dispute about whether, if a tree falls in a deserted forest, it makes a sound, is traditionally considered a mistake.

So what kind of mind design corresponds to that error?

In Disguised Queries I introduced the blegg/rube classification task, in which Susan the Senior Sorter explains that your job is to sort objects coming off a conveyor belt, putting the blue eggs or “bleggs” into one bin, and the red cubes or “rubes” into the rube bin. This, it turns out, is because bleggs contain small nuggets of vanadium ore, and rubes contain small shreds of palladium, both of which are useful industrially.

Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a “rube” instead? You’re going to put it in the rube bin—why not call it a “rube”?

But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

So if you find a blue egg-shaped object that contains palladium, and you ask “Is it a blegg?”, the answer depends on what you have to do with the answer: If you ask “Which bin does the object go in?”, then you choose as if the object is a rube. But if you ask “If I turn off the light, will it glow?”, you predict as if the object is a blegg. In one case, the question “Is it a blegg?” stands in for the disguised query, “Which bin does it go in?”. In the other case, the question “Is it a blegg?” stands in for the disguised query, “Will it glow in the dark?”
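The way one surface question stands in for two different disguised queries can be sketched as a toy program. Everything here (the function names, the dictionary encoding of an object) is hypothetical illustration, not anything from the original post:

```python
# Toy sketch of disguised queries: the same surface question,
# "Is it a blegg?", stands in for two different underlying queries,
# and the anomalous object gets a different answer from each.

def goes_in_rube_bin(obj):
    # Disguised query 1: "Which bin does it go in?"
    # Only the metal inside matters for sorting.
    return obj["metal"] == "palladium"

def glows_in_dark(obj):
    # Disguised query 2: "Will it glow in the dark?"
    # Only the surface cluster of blegg features matters;
    # the metal inside is irrelevant to glowing.
    return obj["color"] == "blue" and obj["shape"] == "egg"

# A blue egg-shaped object that happens to contain palladium:
odd_object = {"color": "blue", "shape": "egg", "metal": "palladium"}

print(goes_in_rube_bin(odd_object))  # True -- choose as if it's a rube
print(glows_in_dark(odd_object))     # True -- predict as if it's a blegg
```

The point of the sketch: neither function ever needs the label “blegg” at all; each disguised query is answered directly from the observables.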

Now suppose that you have an object that is blue and egg-shaped and contains palladium; and you have already observed that it is furred, flexible, opaque, and glows in the dark.

This answers every query, observes every observable introduced. There’s nothing left for a disguised query to stand for.

So why might someone feel an impulse to go on arguing whether the object is really a blegg?

This diagram from Neural Categories shows two different neural networks that might be used to answer questions about bleggs and rubes. Network 1 has a number of disadvantages—such as potentially oscillating/chaotic behavior, or requiring O(N²) connections—but Network 1’s structure does have one major advantage over Network 2: Every unit in the network corresponds to a testable query. If you observe every observable, clamping every value, there are no units in the network left over.

Network 2, however, is a far better candidate for being something vaguely like how the human brain works: It’s fast, cheap, scalable—and has an extra dangling unit in the center, whose activation can still vary, even after we’ve observed every single one of the surrounding nodes.

Which is to say that even after you know whether an object is blue or red, egg or cube, furred or smooth, bright or dark, and whether it contains vanadium or palladium, it feels like there’s a leftover, unanswered question: But is it really a blegg?
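A minimal numerical sketch of the Network 2 idea. The five observables match the post; the uniform weights and the averaging rule are assumptions made purely for illustration, not the post’s actual diagram. The point is that even after every surrounding node is clamped, the central category unit still computes an activation of its own, and for the anomalous object that activation sits awkwardly between “blegg” and “not-blegg”:

```python
# Network 2 sketch: five observable nodes all feed one central
# dangling "blegg?" unit. Weights are assumed uniform for illustration.
observables = ["blue", "egg-shaped", "furred", "glows", "vanadium"]
weights = {name: 1.0 for name in observables}

def central_activation(clamped):
    """Activation of the central unit, in [0, 1], after every
    observable node has been clamped to 0.0 or 1.0."""
    total = sum(weights[name] * clamped[name] for name in observables)
    return total / sum(weights.values())

# The fully observed palladium "blegg": blegg-like on every surface
# feature, rube-like on the metal inside.
odd_object = {"blue": 1.0, "egg-shaped": 1.0, "furred": 1.0,
              "glows": 1.0, "vanadium": 0.0}

print(central_activation(odd_object))  # 0.8 -- neither clearly on nor off
```

Every input is known, yet the central unit’s activation is a further fact about the network’s state; that leftover degree of freedom is the “but is it really a blegg?” feeling, seen from the outside.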

Usually, in our daily experience, acoustic vibrations and auditory experience go together. But a tree falling in a deserted forest unbundles this common association. And even after you know that the falling tree creates acoustic vibrations but not auditory experience, it feels like there’s a leftover question: Did it make a sound?

We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet?

Now remember: When you look at Network 2, as I’ve laid it out here, you’re seeing the algorithm from the outside. People don’t think to themselves, “Should the central unit fire, or not?” any more than you think “Should neuron #12,234,320,242 in my visual cortex fire, or not?”

It takes a deliberate effort to visualize your brain from the outside—and then you still don’t see your actual brain; you imagine what you think is there, hopefully based on science, but regardless, you don’t have any direct access to neural network structures from introspection. That’s why the ancient Greeks didn’t invent computational neuroscience.

When you look at Network 2, you are seeing from the outside; but the way that neural network structure feels from the inside, if you yourself are a brain running that algorithm, is that even after you know every characteristic of the object, you still find yourself wondering: “But is it a blegg, or not?”

This is a great gap to cross, and I’ve seen it stop people in their tracks. Because we don’t instinctively see our intuitions as “intuitions”, we just see them as the world. When you look at a green cup, you don’t think of yourself as seeing a picture reconstructed in your visual cortex—although that is what you are seeing—you just see a green cup. You think, “Why, look, this cup is green,” not, “The picture in my visual cortex of this cup is green.”

And in the same way, when people argue over whether the falling tree makes a sound, or whether Pluto is a planet, they don’t see themselves as arguing over whether a categorization should be active in their neural networks. It seems like either the tree makes a sound, or not.

We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet? And yes, there were people who said this was a fight over definitions—but even that is a Network 2 sort of perspective, because you’re arguing about how the central unit ought to be wired up. If you were a mind constructed along the lines of Network 1, you wouldn’t say “It depends on how you define ‘planet’,” you would just say, “Given that we know Pluto’s orbit and shape and mass, there is no question left to ask.” Or, rather, that’s how it would feel—it would feel like there was no question left—if you were a mind constructed along the lines of Network 1.

Before you can question your intuitions, you have to realize that what your mind’s eye is looking at is an intuition—some cognitive algorithm, as seen from the inside—rather than a direct perception of the Way Things Really Are.

People cling to their intuitions, I think, not so much because they believe their cognitive algorithms are perfectly reliable, but because they can’t see their intuitions as the way their cognitive algorithms happen to look from the inside.

And so everything you try to say about how the native cognitive algorithm goes astray, ends up being contrasted to their direct perception of the Way Things Really Are—and discarded as obviously wrong.

• While “reifying the internal nodes” must indeed be counted as one of the great design flaws of the human brain, I think the recognition of this flaw and the attempt to fight it are as old as history. How many jokes, folk sayings, literary quotations, etc. are based around this one flaw? “in name only,” “looks like a duck, quacks like a duck,” “by their fruits shall ye know them,” “a rose by any other name”… Of course, there wouldn’t be all these sayings if people didn’t keep confusing labels with observable attributes in the first place—but don’t the sayings suggest that recognizing this bug in oneself or others doesn’t require any neural-level understanding of cognition?

• Exactly. People merely need to keep in mind that words are not the concepts they represent. This is certainly not impossible, but—like all aspects of being rational—it’s harder than it sounds.

• I think it goes beyond words.

Reality does not consist of concepts, reality is simply reality. Concepts are how we describe reality. They are like words squared, and have all the same problems as words.

• Looking back from a year later, I should have said, “Words are not the experiences they represent.”

As for “reality,” well it’s just a name I give to a certain set of sensations I experience. I don’t even know what “concepts” are anymore—probably just a general name for a bunch of different things, so not that useful at this level of analysis.

• Ayn Rand defined this for everyone in her book “Introduction to Objectivist Epistemology”. Formation of concepts is discussed in detail there.

Existence exists; Only existence exists. We exist with a consciousness: Existence is identity: Identification is consciousness.

Concepts are the units of Epistemology. Concepts are the mental codes we use to identify existents. Concepts are the bridges between metaphysics and Epistemology. Concepts refer to the similarities of the units, without using the measurements.

Definitions are abbreviations of identification. The actual definitions are the existents themselves.

Language is a verbal code which uses concepts as units. Written language explains how to speak the phonemes.

Language refers to remembered experiences, and uses the concepts which are associated (remembered) with the units of experience as units.

Using language is basically reporting your inner experiences using concepts as units.

The process includes observing and encoding by the speaker, then speaking (transmitting), then receiving, hearing, decoding the general ideas, contextualizing, and integrating into the full world model of the listener. Finally the listener will be able to respond from his updated world model using the same process as the original speaker.

This process is rife with opportunities for misunderstanding. However, the illusion of understanding is what we are left with.

This is generally not known or understood.

The only solution is copious dialog, to confirm that what was intended is that which was understood.

Comments?

• Existence exists; Only existence exists. We exist with a consciousness: Existence is identity: Identification is consciousness.

This seems like a tremendously unhelpful attempt at definition, and it doesn’t really get better from there. It seems as if it’s written more to optimize for sounding Deep than for making any concepts understandable to people who don’t already grasp them.

The only solution is copious dialog, to confirm that what was intended is that which was understood.

The necessary amounts of dialogue are a great deal less copious if one does a good job being clear in the first place.

• This seems like a tremendously unhelpful attempt at definition, and it doesn’t really get better from there. It seems as if it’s written more to optimize for sounding Deep than for making any concepts understandable to people who don’t already grasp them.

There probably isn’t any one single way of defining this in a way that is understandable by everyone. That being said, being able to make the distinction between direct experience and concepts is very useful and epistemology has helped many people with this, so I’d say there is value in it.

• One thing I learned is to never argue with a Randian.

• How much of the Sequences have you read? In particular, have you read 37 Ways That Words Can Be Wrong?

• As a former Objectivist, I understand the point being made.

That said, I no longer agree… I now believe that Ayn Rand made an axiom-level mistake. Existence is not Identity. To assume that Existence is Identity is to assume that all things have concrete properties, which exist and can therefore be discovered. This is demonstrably false; at the fundamental level of reality, there is Uncertainty. Quantum-level effects inherent in existence preclude the possibility of absolute knowledge of all things; there are parts of reality which are actually unknowable.

Moreover, we as humans do not have absolute knowledge of things. Our knowledge is limited, as is the information we’re able to gather about reality. We don’t have the ability to gather all relevant information to be certain of anything, nor the luxury to postpone decision-making while we gather that information. We need to make decisions sooner than that, and we need to make them in the face of the knowledge that our knowledge will always be imperfect.

Accordingly, I find that a better axiom would be “Existence is Probability”. I’m not a good enough philosopher to fully extrapolate the consequences of that… but I do think if Ayn Rand had started with a root-level acknowledgement of fallibility, it would’ve helped to avoid a lot of the problems she wound up falling into later on.

Also, welcome, new person!

• Existence is frequently defined in terms of identity. ‘exists(a)’ ≝ ‘∃x(a=x)’

To assume that Existence is Identity is to assume that all things have concrete properties, which exist and can therefore be discovered. This is demonstrably false; at the fundamental level of reality, there is Uncertainty.

Only if you’re an Objective Collapse theorist of some stripe. If you accept anything in the vicinity of Many Worlds or Hidden Variables, then nature is not ultimately so anthropocentric; all of its properties are determinate, though those properties may not be exactly what you expect from everyday life.

Quantum-level effects inherent in existence preclude the possibility of absolute knowledge of all things; there are parts of reality which are actually unknowable.

If “there are” such parts, then they exist. The mistake here is not to associate existence with identity, but to associate existence or identity with discoverability; lots of things are real and out there and objective but are physically impossible for us to interact with. You’re succumbing to a bit of Rand’s wordplay: She leaps back and forth between the words ‘identity’ and ‘identification’, as though these were closely related concepts. That’s what allows her to associate existence with consciousness—through mere wordplay.

Accordingly, I find that a better axiom would be “Existence is Probability”.

But that axiom isn’t true. I like my axioms to be true. Probability is in the head, unlike existent things like teacups and cacti.

• Existence is frequently defined in terms of identity. ‘exists(a)’ ≝ ‘∃x(a=x)’

Isn’t that just kicking the can down the road? What does it mean for an x to ∃? “There is an x such that …”—there we go with the “is”, with the “be”, with the “exist”.

• RobbBB, in my experience, tends to give pseudo-precise answers like that. It seems like a domain confusion. You are asking about observable reality, he talks about mathematical definitions.

• I’m not a frequent poster here, and I don’t expect my recommendations carry much weight. But I have been reading this site for a few years, and offline I deal with LWish topics and discussions pretty regularly, especially with the more philosophical stuff.

All that said, I think RobbBB is one of the best posters LW has. Like top 10. He stands out for clarity, seriousness, and charity.

Also, I think you shouldn’t do that thing where you undermine some other poster while avoiding directly addressing them or their argument.

• All that said, I think RobbBB is one of the best posters LW has. Like top 10. He stands out for clarity, seriousness, and charity.

It certainly has not been my impression. I found my discussion with him about instrumentalism, here and on IRC, extremely unproductive. Seems like a pattern with other philosophical types here. Maybe they don’t teach philosophers to listen, I don’t know. For comparison, TheOtherDave manages to carry a thoughtful, polite and insightful discussion even when he disagrees. More regulars here could learn rational discourse from him.

Or maybe I’m falling prey to the Bright Dilettante trap and the experts in the subject matter just don’t have the patience to explain things in a friendly and understandable fashion. I’m not sure how to tell.

Also, I think you shouldn’t do that thing where you undermine some other poster while avoiding directly addressing them or their argument.

I take back the “pseudo-” part. His answers were precise, but from a wrong domain.

• Seems like a pattern with other philosophical types here. Maybe they don’t teach philosophers to listen, I don’t know. For comparison, TheOtherDave manages to carry a thoughtful, polite and insightful discussion even when he disagrees. More regulars here could learn rational discourse from him.

Agree on both counts. I’ll second your advocacy of TheOtherDave as a posting-style role model. In particular he conveys the impression that he is far better than the average lesswrong participant at understanding what people are saying to him. (Rather than the all too common practice of pattern-matching a few keywords to the nearest possible stupid thing that can be refuted.)

• Maybe they don’t teach philosophers to listen, I don’t know.

I can tell you from experience that ‘they’ don’t. Do you know who does teach this?

• I don’t know. Certainly there is some emphasis on charitable reading and steelmanning on this forum, but the results are mixed. Maybe it’s taught in psychology, nursing and other areas which require empathy.

• This seems like something a rationalist course could profitably teach, especially if there are no alternative ways to learn it besides informal practice.

• I’m a little unclear on what your criticism is. Is one of these right?

1. You’re being too precise, whereas I wanted to have an informal discussion in terms of our everyday intuitions. So definitions are counterproductive; a little unclarity in what we mean is actually helpful for this topic.

2. There are two kinds of existence, one that holds for Plato’s Realm Of Invisible Mathy Things and one that holds for The Physical World. Your definitions may be true of the Mathy Things, but they aren’t true of things like apples and bumblebees. So you’re committing a category error.

3. I wanted you to give me a really rich, interesting explanation of what ‘existence’ is, in more fundamental terms. But instead you just copy-pasted a bland uninformative Standard Mathematical Logician Answer from some old textbook. That makes me sad. Please be more interesting next time.

If your point was 1, I’ll want to hear more. If it was 3, then my apologies! If it was 2, then I’ll have to disagree until I hear some argument as to why I should believe in these invisible eternal number-like things that exist in their own unique number-like-thing-specific way. (And what it would mean to believe in them!)

• Thank you, this framework helps. Definitely no to 1. Definitely yes to 2, with some corrections. Yes to some parts of 3.

Re 2. First, let me adopt bounded realism here, with physics (external reality or territory) + logic (human models of reality, or maps). Let me ignore the ultraviolet divergence of decompartmentalization (hence “bounded”), where Many Worlds, Tegmark IV and modal realism are considered “territory”. To this end, let me put the UV cutoff on logic at Popper’s boundary: only experimentally falsifiable maps are worth considering. A map is “true” means that it is an accurate representation of the piece of territory it is intended to represent. I apologize in advance if I am inventing new terms for the standard philosophical concepts—feel free to point me to the standard terminology.

Again, “accurate map”, a.k.a. “true map”, is a map that has been tested against the territory and found reliable enough to use as a guide for further travels, at least if one does not stray too far. Correspondingly, a piece of territory is said to “exist” if it is described by an accurate map.

On the other hand, your “invisible mathy things” live in the world of maps. Some of them use the same term “true”, but in a different way: given a set of rules of how to form strings of symbols, true statements are well-formed finite strings. They also use the same term “exist”, but also in a different way: given a set of rules, every well-formed string is said to “exist”.

Now, I am not a mathematician, so this may not be entirely accurate, but the gist is that conflating “exist” as applied to the territory and “exist” as applied to maps is indeed a category error. When someone talks about existence of physical objects and you write out something containing the existential quantifier, you are talking about a different category: not reality, but a subset of maps related to mathematical logic.

I am not sure whether this answers your objection that

why I should believe in these invisible eternal number-like things that exist in their own unique number-like-thing-specific way. (And what it would mean to believe in them!)

but I hope it makes it clear why I find your replies unconvincing and generally not useful.

• You’ve redefined ‘x exists’ to mean ‘x is described by a map that has been tested and so far has seemed reliable to us’, and ‘x is true’ correspondingly. One problem with this is that it’s historical: It commits us to saying ‘Newtonian physics used to be true, but these days it’s false (i.e., not completely reliable as a general theory)’, and to saying ‘Phlogiston used to exist, but then it stopped existing because someone overturned phlogiston theory’. This is pretty strange.

Another problem is that it’s not clear what it takes to be ‘found reliable enough to use as a guide for further travels’. Surely there’s an important sense in which math is reliable in that sense, hence ‘true’ in the territory-ish sense you outlined above, not just in the map-ish sense. So perhaps we’ll need a more precise definition of territory-ish truth in order to clearly demonstrate why math isn’t in the territory, where the territory is defined by empirical adequacy.

I think your view, or one very close to yours, is actually a lot stronger (can be more easily defended, has broader implications) than your argument for it suggests. You can simply note that things like Abstract Numbers, being causally inert, couldn’t be responsible for the ‘unreasonable efficacy of mathematics’; so that efficacy can’t count as evidence for such Numbers. And nothing else is evidence for Numbers either. So we should conclude, on grounds of parsimony (perhaps fortified with anti-Tegmark’s-MUH arguments), that there are unlikely to be such Numbers. At that point, we can make the pragmatic, merely linguistic decision of saying that mathematicians are using ‘exists’ in a looser, more figurative sense.

Perhaps a few mathematicians are deluded into thinking that ‘exists’ means exactly the same thing in both contexts, but it is more charitable to interpret mathematics in general in the less ontologically committing way, because on the above arguments a platonistic mathematics would be little more than speculative theology. Basically, we end up with a formalist or fictionalist description of math, which I think is very plausible.

You see, we aren’t so different, you and I. Not once we bracket whether unexperienced cucumbers exist out there, anyway!

• You’ve redefined ‘x exists’ to mean ‘x is described by a map that has been tested and so far has seemed reliable to us’, and ‘x is true’ correspondingly.

I disagree that this is a redefinition. You believe that elephants exist because you can go and see them, or talk to someone you trust who saw them, etc. You believe that a live T-Rex (almost surely) does not exist because it went extinct some 60-odd million years ago. Both beliefs can be updated based on new information.

‘Newtonian physics used to be true, but these days it’s false’

That’s not at all what I am saying. Consider resisting your tendency to strawman. Newtonian physics is still true in its domain of applicability, and it has never been true where it’s not been applicable, though people didn’t know this until 1905.

‘Phlogiston used to exist, but then it stopped existing because someone overturned phlogiston theory’

Again, a belief at the time was that it existed; a more accurate belief (map) superseded the old one, and now we know that phlogiston never existed. Maps thought of as being reliable can be found wanting all the time, so the territory they describe is no longer believed to exist; it did not stop existing. This is pretty uncontroversial, I would think. Science didn’t kill gnomes and fairies, and such. At least this is the experiment-bounded realist position, as far as I understand it.

You can simply note that things like Abstract Numbers, being causally inert, couldn’t be responsible for the ‘unreasonable efficacy of mathematics’; so that efficacy can’t count as evidence for such Numbers.

I can’t even parse that, sorry. Numbers don’t physically exist because they are ideas, and as such belong in the realm of logic, not physics. (Again, I’m wearing a realist hat here.) I don’t think parsimony is required here. It’s a postulate, not a conclusion.

Perhaps a few mathematicians are deluded into thinking that ‘exists’ means exactly the same thing in both contexts, but it is more charitable to interpret mathematics in general in the less ontologically committing way

Then I don’t understand why you reply to questions of physical existence with some mathematical expressions...

You see, we aren’t so different, you and I. Not once we bracket whether unexperienced cucumbers exist out there, anyway!

I’m not nearly as optimistic.

• I disagree that this is a redefinition. You believe that elephants exist because you can go and see them, or talk to someone you trust who saw them, etc.

Sure, but ‘you believe in X because of Y’ does not as a rule let us conclude ‘X = Y’. I believe in elephants because of how they’ve causally impacted my experience, but I don’t believe that elephants are experiences of mine, or logical constructs out of my experiences and predictions. I believe elephants are animals.

Indeed, a large part of the reason I believe in elephants is that I think elephants would still exist even had you severed the causal links between me and them and I’d never learned about them. The territory doesn’t go away when you stop knowing about it, or even when you stop being able to ever know about it. If you shot an elephant in a rocket out of the observable universe, it wouldn’t stop existing, and I wouldn’t believe it had blinked out of existence or that questions regarding its existence were meaningless, once its future state ceased to be knowable to me.

Elephants don’t live in my map. But they also don’t live in my map-territory relation. Nor do they live in a function from observational data to hypotheses-that-help-us-build-rockets-and-iPhones-and-vaccines. They simply and purely live in the territory.

That’s not at all what I am saying. Consider resisting your tendency to strawman.

I’m not trying to strawman you, I’m suggesting a problem for how you stated your view so that you can reformulate it in a way that I’ll better understand. I’m sorry if I wasn’t clear about that!

Newtonian physics is still true in its domain of applicability, it has never been true where it’s not been applicable, though people didn’t know this until 1905.

Right. But you said “‘accurate map’, a.k.a. ‘true map’ is a map that has been tested against the territory and found reliable enough to use as a guide for further travels”. My objection is that wide-applicability Newtonian physics used to meet your criterion for truth (i.e., for a long time it passed all experimental tests and remained reliable for further research), but eventually stopped meeting it. Which suggests that it was true until it failed a test, or until it ceased to be a useful guide to further research; after that it became false. If you didn’t mean to suggest that, then I’m not sure I understand “map that has been tested against the territory and found reliable enough to use as a guide for further travels” anymore, which means I don’t know what you mean by “truth” and “accuracy” at this point.

Perhaps instead of defining “true” as “has been tested against the territory and found reliable enough to use as a guide for further travels”, what you meant to say was “has been tested against the territory and will always be found reliable enough to use as a guide for further travels”? That way various theories that had passed all tests at the time but are going to eventually fail them won’t count as ever having been ‘true’.

Numbers don’t physically exist because they are ideas, and as such belong in the realm of logic, not physics. (Again, I’m wearing a realist hat here.) I don’t think parsimony is required here. It’s a postulate, not a conclusion.

Postulates like ‘1 is nonphysical’, ‘2 is nonphysical’, etc. aren’t needed here; that would make our axiom set extraordinarily cluttered! The very idea that ‘ideas’ aren’t a part of the physical world is in no way obvious at the outset, much less axiomatic. There was a time when lightning seemed supernatural, a violation of the natural order; conceivably, we could have discovered that there isn’t really lightning (it’s some sort of illusion), but instead we discovered that it reduced to a physical process. Mental contents are like lightning. There may be another version of ‘idea’ or ‘thought’ or ‘abstraction’ that we can treat as a formalist symbol game or a useful fiction, but we still have to also either reduce or eliminate the natural-phenomenon-concept of abstract objects if we wish to advance the Great Reductionist Project.

It sounds like you want to eliminate them, and indeed stop even talking about them because they’re silly. I can get behind that, but only if we’re careful not to forget that not all mathematicians (etc.) agree on this point, and don’t equivocate between the two notions of ‘abstract’ (formal/fictive vs. spooky and metaphysical and Tegmarkish).

Then I don’t understand why you reply to questions of physical existence with some mathematical expressions...

Only because the apples are behaving like numbers whether you believe in numbers or not. You might not think our world does resemble the formalism in this respect, but that’s not obvious to everyone before we’ve talked the question over. A logic can be treated as a regimentation of natural language, or as an independent mathematical structure that happens to structurally resemble a lot of our informal reasoning and natural-language rules. Either way, information we get from logical analysis and deduction can tell us plenty about the physical world.

• Re 2. First, let me adopt bounded realism here, with physics (external reality or territory) + logic (human models of reality, or maps). Let me ignore the ultraviolet divergence of decompartmentalization (hence “bounded”), where Many Worlds, Tegmark IV and modal realism are considered “territory”. To this end, let me put the UV cutoff on logic at Popper’s boundary: only experimentally falsifiable maps are worth considering. A map is “true” means that it is an accurate representation of the piece of territory it is intended to represent. I apologize in advance if I am inventing new terms for the standard philosophical concepts—feel free to point me to the standard terminology.

I sus­pect you have, in fact, rein­vented some­thing. For refer­ence, how does this “bounded re­al­ism” eval­u­ate this state­ment:

On Au­gust 1st 2008 at mid­night Green­wich time, a one-foot sphere of choco­late cake spon­ta­neously formed in the cen­ter of the Sun; and then, in the nat­u­ral course of events, this Boltz­mann Cake al­most in­stantly dis­solved.

It makes no pre­dic­tions; this is, in a sense, epiphe­nom­e­nal cake—I know of no test we could perform that would dis­t­in­guish be­tween a world where this state­ment is false and one where it is true. Cer­tainly track­ing it pro­vides us with no pre­dic­tive power.

Yet is it some­how in­valid? Is it gib­ber­ish? Can it be re­jected a pri­ori? Is there any sense in which it might be true? Is there any sense in which it might be false?

Sorry if I’m mis­in­ter­pret­ing you here; I doubt this has much effect on your over­all point.

• How about this: Mathematicians have a conception of existence which is good enough for doing mathematics, but isn’t necessarily correct. When you give a mathematical definition of existence, you are implicitly assuming a certain mathematical framework without justifying it. I think you would consider this criticism to be a variant of #2.

In par­tic­u­lar, I also think about things math­e­mat­i­cally, but when I do so, I don’t use first-or­der logic, but rather in­tu­ition­is­tic type the­ory. Can you give a defi­ni­tion for ex­is­tence which would satisfy me?

• I’m a math­e­mat­i­cal fic­tion­al­ist, so I’m happy to grant that there’s a good sense in which math­e­mat­i­cal dis­course isn’t strictly true, and doesn’t need to be.

Are you ask­ing for a defi­ni­tion of an in­tu­ition­is­tic ‘ex­ists’ pred­i­cate, or for the in­tu­ition­is­tic ex­is­ten­tial quan­tifier?

First, if you ac­cept that math­e­mat­i­cal con­structs are fic­tional, why do you con­sider it valid to define a con­cept in terms of them? Se­cond, I ad­mit I wasn’t clear on this is­sue: The salient part of in­tu­ition­is­tic type the­ory isn’t in­tu­ition­ism, but rather that it is a struc­tural the­ory. This means that state­ments of the form “ex­ists x, P(x)” are not well defined, but rather only state­ments of the form “ex­ists x in A, P(x)” can be made.

• I’m not say­ing it’s a very use­ful defi­ni­tion, just not­ing that it’s very stan­dard. If we’re go­ing to re­ject some­thing it should be be­cause we thought about it for a while and it still seemed wrong (and, ideally, we could un­der­stand why oth­ers think oth­er­wise). We shouldn’t just re­ject it be­cause it sounds weird and a Paradig­mat­i­cally Wrong Writer is as­so­ci­ated with it.

I agree with you that there’s some­thing cir­cu­lar about this defi­ni­tion, if it’s meant to be ex­plana­tory. (Is it?) But I’m not sure that cir­cu­lar­ity is quite that easy to demon­strate. ∃ could be defined in terms of ∀, for in­stance, or in terms of set mem­ber­ship. Then we get:

‘ex­ists(a)’ ≝ ‘¬∀x¬(a=x)’

or

‘ex­ists(a)’ ≝ ‘a∈EXT(=)’

You could object that ∈ is similarly question-begging because it can be spoken as ‘is an element of’, but here we’re dealing with a more predicational ‘is’, one we could easily replace with a verb.

• I sus­pect the above defi­ni­tions look mean­ingful to those who have stud­ied philos­o­phy and math­e­mat­i­cal logic be­cause they have in­ter­nal­ised the math­e­mat­i­cal ma­chin­ery be­hind ‘∃’. But a proper defi­ni­tion wouldn’t sim­ply re­fer you to an­other sym­bol. Rather, you would de­scribe the math­e­mat­ics in­volved di­rectly.

For ex­am­ple, you can define an op­er­a­tor that takes a pos­si­ble world and a pred­i­cate, and tells you if there’s any­thing match­ing that pred­i­cate in the world, in the ob­vi­ous way. In New­to­nian pos­si­ble wor­lds, the first ar­gu­ment would pre­sum­ably be a set of par­ti­cles and their po­si­tions, or some­thing along those lines.

This would be the logical existence operator, ‘∃’. But it’s not so useful, since we don’t normally talk about existence in rigorously defined possible worlds; we just say something exists or it doesn’t — in the real world. So we invent plain “exists”, which doesn’t take a second argument, but tells you whether there’s anything that matches “in reality”. Which doesn’t really mean anything apart from:

P(exists(Q)) = Σ_{w ∈ models} [1 if ∃_w Q else 0] · P(w)

or in a more sug­ges­tive format

P(exists(Q)) = Σ_{w ∈ models} P(exists(Q) | w) · P(w)

Where P(w) is your prob­a­bil­ity dis­tri­bu­tion over pos­si­ble wor­lds, which is it­self in turn con­nected to your past ob­ser­va­tions, etc.

Any­way, the point is that the above is how “ex­is­tence” is ac­tu­ally used (things be­come more likely to ex­ist when you re­ceive ev­i­dence more likely to be ob­served in wor­lds con­tain­ing those things). So “ex­is­tence” is sim­ply a propo­si­tion/​func­tion of a pred­i­cate whose prob­a­bil­ity marginal­ises like that over your dis­tri­bu­tion over pos­si­ble wor­lds, and never mind try­ing to define ex­actly when it’s true or false, since you don’t need to. Or some­thing like that.
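The marginalisation above is easy to sketch over a finite toy model. The worlds, objects, and probabilities below are invented for illustration, not taken from the comment:

```python
# P(exists(Q)) as a sum over possible worlds: an indicator for whether
# anything in world w satisfies the predicate Q, weighted by P(w).
# Worlds and probabilities are made up for the example.

worlds = {
    "w1": {"chair", "table"},
    "w2": {"chair"},
    "w3": {"chair", "flying unicorn"},
}
p_world = {"w1": 0.5, "w2": 0.25, "w3": 0.25}  # distribution over worlds

def p_exists(predicate):
    # P(exists(Q)) = sum over worlds w of [1 if some x in w satisfies Q] * P(w)
    return sum(p_world[w] for w, objects in worlds.items()
               if any(predicate(x) for x in objects))

print(p_exists(lambda x: x == "chair"))           # 1.0: every world has one
print(p_exists(lambda x: x == "flying unicorn"))  # 0.25: only w3 does
```

Receiving evidence then just means updating p_world, which is exactly the sense in which things “become more likely to exist” when you observe things more likely in worlds containing them.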

• If a defi­ni­tion is not meant to be ex­plana­tory, its use­ful­ness in un­der­stand­ing that which is to be defined is limited.

Tak­ing the two al­ter­nate for­mu­la­tions you offered, I can still hear the tel­l­tale “is” beat­ing, from be­neath the floor planks where you hid it:

‘ex­ists(a)’ ≝ ‘¬∀x¬(a=x)’

The “∀” doesn’t refer to all e.g. logically constructible x, does it? Or to all computable x. For the definition to make sense, it needs to refer to all x that exist, otherwise we’d conclude that ‘exists(flying unicorns)’ is true. The definition still implicitly refers to that which is to be defined, rendering it circular.

‘ex­ists(a)’ ≝ ‘a∈EXT(=)’

What is EXT(=)? Some set of all ex­ist­ing things? If so, would that defi­ni­tion do any work for us? Point­ing at my chair and ask­ing “does this chair ex­ist”, you’d say “well, if it’s a mem­ber of the set of all ex­ist­ing things, it ex­ists”. Why, be­cause all things in the set share the “ex­ist” pred­i­cate. But what does it mean for them to have the “ex­ist” pred­i­cate in the first place? To be part of the set of all ex­ist­ing things, of course. Round and round …

Not much differ­ent from say­ing “if it ex­ists, it ex­ists”. Well, yes. Now what?

• If a defi­ni­tion is not meant to be ex­plana­tory, its use­ful­ness in un­der­stand­ing that which is to be defined is limited.

Ex­actly.

The “∀” doesn’t refer to all e.g. logically constructible x, does it? Or to all computable x. For the definition to make sense, it needs to refer to all x that exist, otherwise we’d conclude that ‘exists(flying unicorns)’ is true. The definition still implicitly refers to that which is to be defined, rendering it circular.

That’s one option for explaining the domain of ∀. Another is to simply say that the domain is the universe, or that it’s everything, or that it’s unrestricted. All of those can be expressed without speaking in terms of existence.

If you have no idea what those ideas mean, but un­der­stand ‘ex­ists’, then, sure, maybe you’ll need to de­mand that all those ideas be un­packed in terms of ex­is­tence. But what of it? If you do un­der­stand those terms but not ‘ex­ists’, then in­ter­defin­ing them can be cog­ni­tively sig­nifi­cant for you. Broadly speak­ing, the func­tion of a defi­ni­tion is to re­late a term that isn’t un­der­stood to a term that is. If you already un­der­stand both terms, then the defi­ni­tion won’t be use­ful to you; but that isn’t a crit­i­cism of the defi­ni­tion, if other peo­ple might not un­der­stand both terms as well as you do. It’s just a bi­o­graph­i­cal note about your own level of lin­guis­tic/​con­cep­tual ex­per­tise.

What is EXT(=)? Some set of all ex­ist­ing things?

It’s the ex­ten­sion of the iden­tity pred­i­cate, a set of or­dered pairs. Re­la­tional pred­i­cates of ar­ity n can be treated as sets of n-tu­ples.
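The “predicates as sets of n-tuples” idea is easy to make concrete over a finite toy domain. The domain and predicates below are invented for illustration:

```python
# Treating predicates as their extensions over a tiny invented domain.
domain = {"chair", "table", "duck"}

# The extension of '=' : the set of ordered pairs (x, x).
EXT_eq = {(x, x) for x in domain}

# A unary predicate's extension is a set of 1-tuples.
EXT_is_furniture = {("chair",), ("table",)}

print(("chair", "chair") in EXT_eq)   # True
print(("chair", "table") in EXT_eq)   # False
print(("duck",) in EXT_is_furniture)  # False
```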

If so, would that defi­ni­tion do any work for us?

Do any work for who? What is it you want, ex­actly? If you’ve for­got­ten, the first thing I said to you was “I’m not say­ing it’s a very use­ful defi­ni­tion”. You don’t need to prove it’s cir­cu­lar in or­der to prove it’s use­less, and if you did prove it’s cir­cu­lar (‘cir­cu­lar’ in what sense? is there any finite non-cir­cu­lar chain of defi­ni­tions that define ev­ery term?) that very likely wouldn’t help demon­strate its use­less­ness. So what ex­actly are you try­ing to es­tab­lish, and why?

• Another is to sim­ply say that that the do­main is the uni­verse, or that it’s ev­ery­thing, or that it’s un­re­stricted. All of those can be ex­pressed with­out speak­ing in terms of ex­is­tence.

Any do­main which is not con­strained to iter­ate/​re­fer only to things which them­selves ex­ist would lead to wrong con­clu­sions such as “fly­ing uni­corns ex­ist”.

What is it you want, ex­actly?

To show that the defi­ni­tion you referred to, in all its var­i­ants, isn’t use­ful. I did not for­get that you didn’t claim it was use­ful, just that it was com­mon, but I also no­ticed you did not ex­plic­itly agree that it was not use­ful. If you do agree on that, there is no need to fur­ther dwell on use­less rephras­ings.

I agree that since the body of hu­man knowl­edge is limited, any defi­ni­tion must even­tu­ally con­tain cir­cles of some size. How­ever, not all cir­cles are cre­ated equal: To be use­ful, a defi­ni­tion must re­fer to some differ­ent part of your knowl­edge base, just be­cause with­out in­tro­duc­ing new in­for­ma­tion, there is noth­ing which could be use­ful.

“2 is defined as something with the property of being 2” isn’t useful because there is nothing new introduced. “That which exists, exists” isn’t useful for the same reason. Because all the definitions you referred to still contain “exist”, the additional information (“things in a set”) is superfluous; the “exist” on the right side of the definition still isn’t unpacked. Hence no additional information is introduced, and the definition is useless, being equivalent to “2 is defined as 2”.

“Pain is when some­thing which is in the set of ‘be­ing able to ex­pe­rience pain’ ex­pe­riences pain” just re­duces to “pain is when pain”, which must be use­less since it con­tains no ad­di­tional con­cepts.

If the ad­di­tional “iden­tity” as­pects etcetera helped any in ex­plain­ing the con­cept of “ex­ist”, then the defi­ni­tion would not need to re­fer again to just the same “ex­ist” which the “iden­tity” sup­pos­edly helped ex­plain.

• Any do­main which is not con­strained to iter­ate/​re­fer only to things which them­selves ex­ist would lead to wrong con­clu­sions such as “fly­ing uni­corns ex­ist”.

If I’m not mi­s­un­der­stand­ing you, you’re ad­vo­cat­ing a view like Gra­ham Priest does here, that our quan­tifiers should range over any­thing we can mean­ingfully talk about (if not wider?) un­til we re­strict them fur­ther. I’m in­clined to agree. We both dis­sent from the or­tho­dox defi­ni­tion I posted above, then. You’ll need to dig up a Quinean if you want to hear counter-ar­gu­ments.

I also no­ticed you did not ex­plic­itly agree that it was not use­ful.

Well, I’m sure it’s been use­ful to some­one at some point. It lets lo­gi­ci­ans get away with­out ap­peal­ing to an ‘ex­ists’ pred­i­cate. Lo­gi­ci­ans are gen­er­ally much more at­tached to ‘is iden­ti­cal to’ than to ‘ex­ists’. Again, you’ll have to ex­plain ex­actly what kind of use you want out of the ideal Defi­ni­tion of Ex­is­tence so I can eval­u­ate whether the above ones I tossed about are use­ful with re­spect to that goal. What are some ex­am­ples of new in­sights or prac­ti­cal goals you were hop­ing or ex­pect­ing to achieve by defin­ing ‘ex­ists’?

To be use­ful, a defi­ni­tion must re­fer to some differ­ent part of your knowl­edge base

Could you say more about what you mean by ‘differ­ent parts of your knowl­edge base’? Is there a heuris­tic for de­cid­ing when things are parts of the same knowl­edge base?

“2 is defined as some­thing with the prop­erty of be­ing 2” isn’t use­ful be­cause there is noth­ing new in­tro­duced.

Is “2 is defined as SS∅” use­ful? Or “2 is defined as {{},{{}}}”? Or “2 is defined as 1+1”? Are there any use­ful defi­ni­tions of 2?

Be­cause all the defi­ni­tions you referred to still con­tain “ex­ist”

What do you mean by “con­tain”? They didn’t make refer­ence to ex­is­tence twice. You noted we could re­verse the defi­ni­tions or build a chain, but that’s true of any defi­ni­tions. (If they weren’t dread­fully bor­ing, we’d prob­a­bly not call them defi­ni­tions.)

Do you mean that they pre­sup­posed an un­der­stand­ing of ex­is­tence, i.e., if you didn’t first un­der­stand ex­is­tence then you couldn’t un­der­stand my defi­ni­tions? Or do you mean that con­cepts are com­bi­na­to­rial, and the con­cepts I ap­pealed to all have as com­po­nents the con­cept ‘ex­is­tence’?

“Pain is when some­thing which is in the set of ‘be­ing able to ex­pe­rience pain’ ex­pe­riences pain” just re­duces to “pain is when pain”, which must be use­less since it con­tains no ad­di­tional con­cepts.

Your defi­ni­tions are cir­cu­lar in the strong sense that they’re of the form ‘… a … = … a …’. But in­ter­est­ing and use­ful iden­tities and equal­ities can re-use the term on both sides. Gen­er­ally they then re­duce to pred­i­ca­tions. For in­stance, “pain oc­curs when some­thing ex­pe­riences pain” is a pretty hideous at­tempt at a defi­ni­tion, but it doesn’t re­duce to “pain is when pain” (which isn’t even a sen­tence); it re­duces to “pain is an ex­pe­rience”. That’s po­ten­tially use­ful, but it would’ve been more use­ful if we hadn’t dressed it up as though it were an anal­y­sis.

All of this seems a bit beside the point, though. None of the defi­ni­tions I cited re-used the same term, whereas all the ex­am­ples you made up to crit­i­cize them do re-use the same term on both sides of the defi­ni­tion. If your goal is to draw an anal­ogy that prob­le­ma­tizes cer­tain prac­tices in math­e­mat­i­cal logic, you should in­clude at least some prob­lem cases that look like the for­mu­las I first posted.

• What do you mean by “con­tain”? They didn’t make refer­ence to ex­is­tence twice (...) None of the defi­ni­tions I cited re-used the same term

That’s prob­a­bly our main point of con­tention, since I’d ar­gue that they do. Not ev­i­dent when do­ing shal­low pars­ing on a very su­perfi­cial level, but plainly there nonethe­less.

Say I gave you this defi­ni­tion: “2 is defined as (the fol­low­ing in ROT13) ‘gur ahzrevp inyhr bs gur jbeq gjb’”, with the ROT13 part (for your con­ve­nience) spel­ling “the nu­meric value of the word two”. I’d say that such a defi­ni­tion still reused the term to be defined in the right part of the defi­ni­tion, wouldn’t you?

Your defi­ni­tions by ne­ces­sity re­duce to ‘ex­ists(a) =(def) there is an x such that ex­ists(x) and (x=a)’

It is triv­ial to show that if your uni­ver­sal or your ex­is­ten­tial quan­tifier’s do­main (i.e. the pos­si­ble val­ues which x could take) were any­thing other than pre­cisely those x’s for which ex­ists(x) is true, the defi­ni­tion would be wrong:

Say the do­main set con­tained only {blue, green}, so x could only match to blue or green. Then ex­ists(a) would only re­turn true for blue and green. Not enough!

Say the set al­lowed for x to match to any­thing which is con­ceiv­able, such as a fly­ing spaghetti mon­ster (or what­ever). Then ex­ists(fly­ing spaghetti mon­ster) would eval­u­ate to ‘true’, since there would be such an x. Too much!

The definition works iff the domain of either the universally quantified or the existentially quantified version of the definition is precisely “the things that exist”, i.e. those x for which exists(x) returns true.

Hid­ing in rather plain sight, don’t you think? Even the ROT13 offered more ob­scu­rity.
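The point about domains can be made concrete over finite sets. Defining exists(a) as “some x in the domain equals a” hands the whole question over to whoever chose the domain; the domains below are made up for the example:

```python
# 'exists(a)' defined as 'there is an x (in the domain) with x = a'.
# Over a finite domain this is just a membership test, so the answer is
# entirely determined by which domain the quantifier ranges over.
def exists(a, domain):
    return any(x == a for x in domain)

too_small = {"blue", "green"}                             # "Not enough!"
too_big = {"blue", "green", "chair", "flying unicorn"}    # "Too much!"

print(exists("chair", too_small))         # False, though chairs are real
print(exists("flying unicorn", too_big))  # True, though unicorns aren't
```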

It lets lo­gi­ci­ans get away with­out ap­peal­ing to an ‘ex­ists’ pred­i­cate.

If only. I dis­agree that it does (be­cause of the above).

What are some ex­am­ples of new in­sights or prac­ti­cal goals you were hop­ing or ex­pect­ing to achieve by defin­ing ‘ex­ists’?

Well, some­thing ‘new’ to work with. Where ‘we’ could go from there would prob­a­bly de­pend on the con­cepts the defi­ni­tion re­lates ‘ex­ists’ to. As with so much else, no prac­ti­cal goal other than the usual men­tal onanism. In our par­tic­u­lar ex­change, mostly show­ing that the defi­ni­tion you gave can­not be use­ful.

Could you say more about what you mean by ‘differ­ent parts of your knowl­edge base’? Is there a heuris­tic for de­cid­ing when things are parts of the same knowl­edge base?

A defi­ni­tion must es­tab­lish some re­la­tion of any kind to some other con­cept, or pred­i­cate. ‘Differ­ent part’ as in ‘not only the ex­act same con­cept which is to be ex­plained’.

You are right with “pain is an ex­pe­rience” offer­ing a con­nec­tion to some other con­cept, and thus be­ing po­ten­tially use­ful. How­ever, the defi­ni­tion for ex­ist we are dis­cussing offers no such ad­di­tional con­cept. You can define a set of things which share an at­tribute for any­thing (that set could be empty), that’s no new in­for­ma­tion re­gard­ing the thingie in ques­tion (un­less you start list­ing ex­am­ples), it does not con­strain the con­cept space in any way.

FWIW, if some­one said “pain: pain is an ex­pe­rience”, that would be quite a poor defi­ni­tion, but as you cor­rectly pointed out, at least we would’ve learned some­thing new.

A good litmus test may be: “if you were tasked with explaining your concepts to some strange alien, could it potentially glean anything from your definition?” Pain is an experience: yes (new information). Exists(x): you can define a set for all x’s for which exists(x) is true: no (alien looks at you uncomprehendingly).

• I’d say that such a defi­ni­tion still reused the term to be defined in the right part of the defi­ni­tion, wouldn’t you?

So your claim is that uni­ver­sal quan­tifi­ca­tion, and iden­tity and/​or set mem­ber­ship, are all in effect just triv­ial lin­guis­tic obfus­ca­tions of ex­is­tence?

Your defi­ni­tions by ne­ces­sity re­duce to ‘ex­ists(a) =(def) there is an x such that ex­ists(x) and (x=a)’

The idea that ex­is­tence is in some way a con­cep­tual pre­req­ui­site for the par­tic­u­lar quan­tifier is an in­ter­est­ing idea, and I could imag­ine good ar­gu­ments be­ing made for it. Cer­tainly Gra­ham Priest would agree with your above claim. But I don’t see any cor­re­spond­ing rea­son yet to think this about ‘ex­ists(a)’ ≝ ‘a∈EXT(=)’.

It is triv­ial to show that if your uni­ver­sal or your ex­is­ten­tial quan­tifier’s do­main (i.e. the pos­si­ble val­ues which x could take) were any­thing other than pre­cisely those x’s for which ex­ists(x) is true, the defi­ni­tion would be wrong

Why does that matter? It’s trivial to show that if the set of primary colors were a different set, then extensional definitions of the primary colors would fail. But this doesn’t undermine extensional definitions of primary colors.

Per­haps what you’re try­ing to get at is that we couldn’t con­struct the iden­tity set, or the proper do­main for our quan­tifiers, with­out prior knowl­edge that amounts to knowl­edge of which things ex­ist? I.e. we couldn’t build an al­gorithm that ac­tu­ally gives us the right an­swers to ‘are a and b iden­ti­cal?’ or ‘is a an ob­ject in the do­main of dis­course?’ with­out first un­der­stand­ing on some level what sorts of things ex­ist? Is that the idea? A defi­ni­tion then is un­ex­plana­tory (or ‘use­less’) if the definiens can­not be con­structed with perfect re­li­a­bil­ity with­out first grasp­ing the definien­dum.

The definition works iff the domain of either the universally quantified or the existentially quantified version of the definition is precisely “the things that exist”, i.e. those x for which exists(x) returns true.

Yes… but, then, that’s true for ev­ery defi­ni­tion. What­ever defi­ni­tion of ‘bird’ we give will, ideally, re­turn pre­cisely the set of birds to us. It would be a prob­lem if the two didn’t co­in­cide, surely; so why is it equally a prob­lem if the two do co­in­cide? I can’t make an ob­jec­tion out of this, un­less we go with some­thing like the one in the pre­vi­ous para­graph.

If only. I dis­agree that it does (be­cause of the above).

Well, it still does. You don’t use an ‘ex­ists’ pred­i­cate in the logic. Your claim is philo­soph­i­cal or metase­man­tic; it’s not about what log­i­cal or non­log­i­cal pred­i­cates we use in a sys­tem. Lo­gi­ci­ans have found a neat trick for re­duc­ing how many prim­i­tives they need to be ex­pres­sively com­plete; you’re ob­ject­ing in effect that their trick doesn’t help us un­der­stand the True Na­ture Of Be­ing, but one sus­pects that this is or­thog­o­nal to the origi­nal idea, at least as many lo­gi­ci­ans see it.

Well, some­thing ‘new’ to work with.

How ’bout iden­tity?

‘Differ­ent part’ as in ‘not only the ex­act same con­cept which is to be ex­plained’.

Are you say­ing that iden­tity, ex­is­tence, uni­ver­sal quan­tifi­ca­tion, and par­tic­u­lar quan­tifi­ca­tion are all the ex­act same con­cept? If so, your con­cepts must be very mul­ti­faceted things!

How­ever, the defi­ni­tion for ex­ist we are dis­cussing offers no such ad­di­tional con­cept.

So you’re at a min­i­mum say­ing that ‘∃’ and ‘ex­ists’ are the same con­cept. Are you say­ing the same for ‘∀’, ‘¬’, ‘∈’, ‘=’, etc.?

This pa­per might in­ter­est you; it also dis­cusses trans­lata­bil­ity into alien lan­guages with differ­ent ways e.g. of quan­tify­ing: Be­ing, ex­is­tence, and on­tolog­i­cal com­mit­ment.

• So your claim is that uni­ver­sal quan­tifi­ca­tion, and iden­tity and/​or set mem­ber­ship, are all in effect just triv­ial lin­guis­tic obfus­ca­tions of ex­is­tence?

They are tools which in them­selves can con­struct re­la­tion­ships that fur­ther de­scribe that which is to be de­scribed. They just don’t in this case. Syn­tac­tic con­cate­na­tion of op­er­a­tors doesn’t equal boun­tiful se­man­tic con­tent. Just like you can con­struct mean­ingless sen­tences even though those sen­tences still are com­posed of let­ters.

“a ex­ists if there is some x which ex­ists which is the ex­act same as a”, “If a and x are iden­ti­cal (the same ac­tual thing) and x ex­ists, we can con­clude that a ex­ists, since it is in fact x”.

These definitions can be used for most any property replacing “exists”. The particular usage of ‘∀’, ‘¬’, ‘∈’, ‘=’, “identity” or what have you in this case doesn’t add any content, or any concepts, if it’s just bloviating that reduces to “if x exists, and x is a, then a exists”, or in short, if P(a) then P(a).

What­ever defi­ni­tion of ‘bird’ we give will, ideally, re­turn pre­cisely the set of birds to us.

Ideally. More leniently, “useful” would mean that given a definition, we would at least have some changed notion of whether at least one thing, or class of things, belongs in the set of birds or not. Even if someone just told you that a duck is a bird and nothing else, you would have learned something about birds. As an alien, at least you could answer yes when pointed to a duck, if nothing else.

Ex­plain­ing “is a bird(x)” by refer­ring to a set which by defi­ni­tion con­tains all things which are birds, with­out giv­ing any fur­ther ex­pla­na­tion or ex­am­ples, and then say­ing that if x is in the set of all birds, it is a bird, doesn’t give us any in­for­ma­tion what­so­ever about birds, and amounts to say­ing “well, if it’s a bird, and we pos­tu­late a set in which there would be all the birds, that bird would be in that set!”. Who woulda thunk?

Say­ing “there are chairs which ex­ist” gives us more in­for­ma­tion about what ex­ists means then the first two defi­ni­tions we’re talk­ing about.

Concerning ‘exists(a)’ ≝ ‘a∈EXT(=)’, I can’t comment because I have no idea what precisely is meant by that ‘extension’ of =. Is it supposed to be exactly restatable as equivalent to the other two definitions? If so, naturally the same arguments apply. If not, can you give further information about this mysterious extension?

• I think we share the same views, at least in spirit. I’m just not satis­fied by your ar­gu­ments for them.

First, your analo­gies weren’t rele­vantly similar to the origi­nal equa­tions. Se­cond, your pre­vi­ous ar­gu­ments de­pended on some­what mys­te­ri­ous no­tions of ‘con­cept con­tain­ment’, similar to Kant’s origi­nal no­tion of anal­y­sis, that I sus­pect will lead us into trou­ble if we try to pre­cisely define them. And third, your new ar­gu­ment seems to de­pend on a no­tion of these sym­bols as ‘purely syn­tac­tic’, de­void of se­man­tics. But I find this if any­thing even less plau­si­ble than your prior ob­jec­tions. Per­haps there’s a sense in which ‘not not p’ gives us no im­por­tant or use­ful in­for­ma­tion that wasn’t origi­nally con­tained in ‘p’ (which I think is your ba­sic in­tu­ition), but it has noth­ing to do with whether the sym­bol ‘not’ is ‘purely syn­tac­tic’; if words like ‘all’ and ‘some’ and ‘is’ aren’t bare syn­tax in English, then I see no rea­son for them to be so in more for­mal­ized lan­guages.

Informally stated, a conclusion like ‘the standard way of defining existence in predicate calculus is kind of silly and uninformative’ is clearly true—its truth is far more certain than is the truth of the premises that have been used so far to argue for it. So perhaps we should leave it at that and return to the problem later from other angles, if we keep hitting a wall resulting from our lack of a general theory of ‘concept containment’ or ‘semantically trivial or null assertion’?

• (I don’t claim to be able to iden­tify all use­less defi­ni­tions as use­less, just as I can’t la­bel all sets which are in fact the empty set cor­rectly. That is not nec­es­sary.)

I’m talk­ing about the spe­cific first two defi­ni­tions you gave. Let me give it one more try.

foo(a) is a pred­i­cate, it eval­u­ates to true or false (in bi­nary logic). This is not new in­for­ma­tion (edit: if we go into the whole or­deal already know­ing we set out to define the pred­i­cate foo(.)), so the let­ter se­quence foo(a) it­self doesn’t tell us any­thing new (e.g. foo(‘some iden­ti­fied el­e­ment’)=true would).

You can gather ev­ery­thing for which foo(‘that thing’) is true in a set. This does not tell us any­thing new about the pred­i­cate. The set could be empty, it could have one el­e­ment, it could be in­finitely large.

We’re not con­strain­ing foo(.) in any way, we’re sim­ply say­ing “we define a set con­tain­ing all the things for which foo(thing) is true”.

Then we’re go­ing through all the differ­ent el­e­ments of that set (which could be no el­e­ments, or in­finitely many el­e­ments), and if we find an el­e­ment which is the ex­act same as ‘a’, we con­clude that foo(a) is true.

The ‘iden­tity’ is not in­tro­duc­ing any new spe­cific in­for­ma­tion what­so­ever about what foo(.) means. You can do the ex­act same with any pred­i­cate. If ‘a’ is ‘x’, then they are iden­ti­cal. You can re­place any refer­ence to ‘a’ with ‘x’ or vice versa.

Which vari­able name you use to re­fer to some el­e­ment doesn’t tell us any­thing about the el­e­ment, un­less it’s a de­scrip­tive name. The let­ter ‘a’ doesn’t tell you any­thing about an el­e­ment of a set, nor does ‘x’. And if ‘a’ = ‘x’, there is no differ­ence. It’s the clas­si­cal tau­tol­ogy: a=a. x=x. There is no ‘new in­for­ma­tion’ what­so­ever about the pred­i­cate foo(.) there.

In fact, the definitions you gave can be used for any predicate, any predicate at all! (… which takes one argument. The first two definitions, that is; we’re still unclear on the third.) An alien could no more know you’re talking about ‘existence’ than about ‘contains strawberry seeds’, if not for how we named the predicate going in.

You can prob­a­bly re­place foo(a) with ex­ists(a) on your own …

That is why I reject the definition as wholly uninformative and useless. The most interesting part is that existing is described as a predicate at all, and that’s an (unexplained) assumption made before the fully generic, and thus useless, definition is made.

Which of the above do you dis­agree with? (Re­gard­ing ‘con­cept con­tain­ment’, I very much doubt we’d run into much trou­ble with that no­tion. An equiv­a­lent for­mu­la­tion to ‘con­cept con­tain­ment’ when say­ing any­thing about a pred­i­cate would be ‘any in­for­ma­tion which is not equally ap­pli­ca­ble to all pos­si­ble pred­i­cates’.)

• I’ve had this cir­cu­lar dis­cus­sion with Rob­bBB for a cou­ple of hours. Maybe you will have bet­ter luck.

• I should prob­a­bly let Rob an­swer for him­self, but he did say that ex­is­tence is fre­quently defined in terms of iden­tity, not by iden­tity.

• Don’t the say­ings sug­gest that rec­og­niz­ing this bug in one­self or oth­ers doesn’t re­quire any neu­ral-level un­der­stand­ing of cog­ni­tion?

Clearly, bug-recog­ni­tion at the level de­scribed in this blog post does not so re­quire, be­cause I have no idea what the biolog­i­cal cir­cuitry that ac­tu­ally rec­og­nizes a tiger looks like, though I know it hap­pens in the tem­po­ral lobe.

• At risk of sound­ing ig­no­rant, it’s not clear to me how Net­work 1, or the net­works in the pre­req­ui­site blog post, ac­tu­ally work. I know I’m sup­posed to already have su­perfi­cial un­der­stand­ing of neu­ral net­works, and I do, but it wasn’t im­me­di­ately ob­vi­ous to me what hap­pens in Net­work 1, what the al­gorithm is. Be­fore you roll your eyes, yes, I looked at the Ar­tifi­cial Neu­ral Net­work Wikipe­dia page, but it still doesn’t help in de­ter­min­ing what yours means.

• Silas, I’m sure you’ve seen the answer by now, but for anyone who comes later: if you think of the diagrams above as Bayes networks, then you’re on the right track.

• Network 1 would work just fine (ignoring how you’d go about training such a thing). Each of the N^2 edges has a weight expressing the relationship of the vertices it connects. E.g. if nodes A and B are strongly anti-correlated, the weight between them might be −1. You then fix the nodes you know, and either solve the system analytically or iterate numerically until it settles down (hopefully!), and then you have expectations for all the unknowns.

Typical networks for this sort of thing don’t have cycles, so stability isn’t a question, but that doesn’t mean that networks with cycles can’t work and reach stable solutions. Some error-correcting codes have graph representations that aren’t much better than this. :)
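The fix-and-iterate procedure described above can be sketched in code. This is a minimal illustration only, not the post’s actual model: it assumes ±1 unit states, invented symmetric weights, and a simple sign-threshold update (Hopfield-style), with the known units clamped.

```python
# Sketch of the Network 1 idea: five +/-1 units, one per characteristic,
# fully connected with symmetric weights. Clamp the known units and
# iterate until the unknown ones settle. All weights here are made up.

def settle(weights, state, known, max_sweeps=100):
    """Repeatedly update unclamped units to the sign of their weighted input."""
    n = len(state)
    for _ in range(max_sweeps):
        changed = False
        for i in range(n):
            if i in known:
                continue  # clamped: this observable is fixed
            total = sum(weights[i][j] * state[j] for j in range(n) if j != i)
            new = 1 if total >= 0 else -1
            if new != state[i]:
                state[i] = new
                changed = True
        if not changed:
            break  # stable pattern reached
    return state

# Toy weights: all five blegg traits correlate, so every pairwise weight is +1.
n = 5
W = [[0 if i == j else 1 for j in range(n)] for i in range(n)]
# Observe four blegg traits; the fifth (say, texture) starts with a wrong guess.
state = [1, 1, 1, 1, -1]
known = {0, 1, 2, 3}
print(settle(W, state, known))  # -> [1, 1, 1, 1, 1]
```

With cycles in the graph, convergence is exactly the “hopefully!” above; for symmetric weights this kind of update does settle, which is the Hopfield-network result.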

• Well, is “Pluto is a planet” the right password, or not? ;)

• I was wondering if anyone would notice that Network 2 with logistic units was exactly equivalent to Naive Bayes.

To be precise, Naive Bayes assumes that within the blegg cluster, or within the rube cluster, all remaining variance in the characteristics is independent; or to put it another way, once we know whether an object is a blegg or a rube, this screens off any other information that its shape could tell us about its color. This isn’t the same as assuming that the only causal influence on a blegg’s shape is its blegg-ness—in fact, there may not be anything that corresponds to blegg-ness.

But one reason that Naive Bayes does work pretty well in practice is that a lot of objects in the real world do have causal essences, like the way that cat DNA (which doesn’t mix with dog DNA) is the causal essence that gives rise to all the surface characteristics that distinguish cats from dogs.

The other reason Naive Bayes works pretty well in practice is that it often successfully chops up a probability distribution into clusters even when the real causal structure looks nothing like a central influence.
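The screening-off assumption above can be shown with a toy calculation: given the category, each characteristic is treated as independent, and the central node is just the posterior. Every likelihood number below is invented for illustration.

```python
# Minimal Naive Bayes sketch of the "central node" view. Given the
# category (blegg or rube), each observed characteristic is assumed
# independent. All probability values here are made up for the example.

def posterior_blegg(observed, p_blegg=0.5):
    """P(blegg | observed features), under conditional independence."""
    # (P(feature | blegg), P(feature | rube)) -- illustrative values only
    likelihoods = {
        "blue":   (0.97, 0.02),
        "egg":    (0.98, 0.03),
        "glows":  (0.95, 0.05),
        "furred": (0.90, 0.10),
    }
    p_b, p_r = p_blegg, 1.0 - p_blegg
    for feature, present in observed.items():
        lb, lr = likelihoods[feature]
        if not present:
            lb, lr = 1.0 - lb, 1.0 - lr
        p_b *= lb
        p_r *= lr
    return p_b / (p_b + p_r)  # normalize over the two categories

obs = {"blue": True, "egg": True, "glows": True, "furred": False}
print(round(posterior_blegg(obs), 4))  # -> 0.9997
```

Even with one blegg-atypical feature (not furred), the other three dominate: this is the “chops a distribution into clusters” behavior, whether or not any blegg-essence exists.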

• Silas, let me try to give you a little more explicit answer. This is how I think it is meant to work, although I agree that the description is rather unclear.

Each dot in the diagram is an “artificial neuron”. This is a little machine that has N inputs and one output, all of which are numbers. It also has an internal “threshold” value, which is also a number. The neuron computes a “weighted sum” of its N inputs: each input has a “weight”, another number, and the neuron multiplies weight 1 times input 1, plus weight 2 times input 2, plus weight 3 times input 3, and so on, to get the weighted sum. (Note that weights can also be negative, so some inputs can lower the sum.) It then compares this with the threshold value. If the sum is greater than the threshold, it outputs 1; otherwise it outputs 0. If a neuron’s output is 1, we say it is “firing” or “activated”.
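The “little machine” just described is short enough to write out directly; the weights and threshold below are arbitrary numbers chosen for illustration.

```python
# One artificial neuron: weighted sum of inputs compared to a threshold.
# The example weights and threshold are arbitrary illustrative values.

def neuron(inputs, weights, threshold):
    """Fires (returns 1) iff the weighted sum of inputs exceeds the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > threshold else 0

# Two active excitatory inputs outweigh one inhibitory (negative) weight:
print(neuron([1, 1, 1], [0.6, 0.7, -0.4], 0.5))  # -> 1 (fires: 0.9 > 0.5)
print(neuron([0, 0, 1], [0.6, 0.7, -0.4], 0.5))  # -> 0 (silent: -0.4 <= 0.5)
```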

The diagram shows how the ANs are hooked up into a network, an ANN. Each neuron in Figure 1 has 5 inputs. 4 of them come from the other 4 neurons in the circuit and are represented by the lines. The 5th comes from the particular characteristic which is assigned to that neuron, i.e. color, luminance, etc. If the object has that property, that 5th input is a 1, else a 0. All of the connections in this network are bidirectional, so that neuron 1 receives input from neuron 2, while neuron 2 receives input from neuron 1, etc.

So to think about what this network does, we imagine inputting the 5 qualities which are observed about an object to the “5th” input of each of the 5 neurons. We imagine that the current output levels of all the neurons are set to something arbitrary, let’s just say zero. And perhaps initially the weights and threshold values are also quite random.

When we give the neurons this activation pattern, some of them may end up firing and some may not, depending on how the weights and thresholds are set up. And once a neuron starts firing, that feeds into one of the inputs of the other 4 neurons, which may change their own state. That feeds back through the network as well. This may lead to oscillation or an unstable state, but hopefully it will settle down into some pattern.

Now, according to various rules, we will typically adjust the weights. There are different ways to do this, but I think the concept in this example is that we will try to make the output of each neuron match its “5th input”, the object characteristic assigned to that neuron. We want the luminance neuron to activate when the object is luminous, and so on. So we increase weights that will tend to move the output in that direction, decrease weights that would move it the other way, and tweak the thresholds a bit. We do this repeatedly with different objects, making small changes to the weights—this is “training” the network. Eventually it hopefully settles down and does pretty much what we want it to.

Now we can give it some wrong or ambiguous inputs, and ideally it will still produce the output that is supposed to go there. If we input 4 of the characteristics of a blegg, the 5th neuron will also show the blegg-style output. It has “learned” the characteristics of bleggs and rubes.

In the case of Network 2, the setup is simpler—each edge neuron has just 2 inputs: its unique observed characteristic, and a feedback value from the center neuron. Each one performs its weighted-sum trick and sends its output to the center one, which has its own set of weights and a threshold that determines whether it activates or not. In this case we want to teach the center one to distinguish bleggs from rubes, so we would train it that way—adjusting the weights a little bit at a time until we find it firing when it is a blegg but not when it is a rube.

Anyway, I know this is a long explanation, but I didn’t see anyone else making it explicit. Hopefully it is mostly correct.
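The “adjusting the weights a little bit at a time” loop for the center node can be made concrete with a perceptron-style update rule. This is one possible reading, not necessarily the post’s intended learning rule, and the feature encoding, learning rate, and data are all invented for illustration.

```python
# Sketch of training a Network 2 style center node: after each example,
# nudge the weights toward the correct answer (perceptron update rule).
# Data, learning rate, and epoch count are illustrative only.

def train_center(examples, labels, lr=0.1, epochs=50):
    """examples: lists of 0/1 features; labels: 1 for blegg, 0 for rube."""
    n = len(examples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(examples, labels):
            out = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = target - out                     # 0 if already correct
            weights = [w + lr * err * xi for w, xi in zip(weights, x)]
            bias += lr * err
    return weights, bias

# Features: [blue, egg-shaped, glows, furred]; bleggs mostly have them.
data = [[1, 1, 1, 1], [1, 1, 1, 0], [0, 0, 0, 0], [0, 0, 1, 0]]
labels = [1, 1, 0, 0]
w, b = train_center(data, labels)

def classify(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

print([classify(x) for x in data])  # -> [1, 1, 0, 0]
```

After training, the center node fires for blegg-like feature bundles and stays quiet for rube-like ones, which is exactly the behavior the comment describes.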

• Silas, the diagrams are not neural networks, and don’t represent them. They are graphs of the connections between observable characteristics of bleggs and rubes.

• Except that around 2% of blue egg-shaped objects contain palladium instead. So if you find a blue egg-shaped thing that contains palladium, should you call it a “rube” instead? You’re going to put it in the rube bin—why not call it a “rube”?

But when you switch off the light, nearly all bleggs glow faintly in the dark. And blue egg-shaped objects that contain palladium are just as likely to glow in the dark as any other blue egg-shaped object.

So if you find a blue egg-shaped object that contains palladium, and you ask “Is it a blegg?”, the answer depends on what you have to do with the answer: If you ask “Which bin does the object go in?”, then you choose as if the object is a rube. But if you ask “If I turn off the light, will it glow?”, you predict as if the object is a blegg. In one case, the question “Is it a blegg?” stands in for the disguised query, “Which bin does it go in?”. In the other case, the question “Is it a blegg?” stands in for the disguised query, “Will it glow in the dark?”

This is amazing, but too fast. It’s too important and counterintuitive to do that fast, and we absolutely, devastatingly, painfully need it in philosophy departments. Please help us. This is an S.O.S.; our ship is sinking. Write this again, longer, so that I can show it to people and change their minds, people who are not LessWrong-literate. It’s too important to go over that fast, anyway. I also ask that you, or anyone for that matter, find a simple real-world example which has roughly analogous parameters to the ones you specified, and use that as the example instead. Somebody do it, please; I’m too busy arguing with philosophy professors about it, and there are better writers on this site who could take up the endeavor. It would be useful and well liked anyway, chances are, and I’ll give what rewards I can.

• Silas, see Naive Bayes classifier for how an “observable characteristics graph” similar to Network 2 should work in theory. It’s not clear whether Hopfield or Hebbian learning can implement this, though.

To put it simply, Network 2 makes the strong assumption that the only influence on features such as color or shape is whether the object is a rube or a blegg. This is an extremely strong assumption which is often inaccurate; despite this, naive Bayes classifiers work extremely well in practice.

• I think the standard analysis is essentially correct. So let’s accept that as a premise, and ask: Why do people get into such an argument? What’s the underlying psychology?

I think that people historically got into this argument because they didn’t know what sound was. It is a philosophical appendix, a vestigial argument that no longer has any interest.

• There is a good quote by Alan Watts relating to the first paragraphs.

Problems that remain persistently insoluble should always be suspected as questions asked in the wrong way.

• So… is this pretty much a result of our human brains wanting to classify something? Like, if something doesn’t necessarily fit into a box that we can neatly file away, our brains puzzle over where to classify it, when actually it is its own classification… if that makes sense?

• The extra node in network 2 corresponds to assigning a label, an abstract term, to the thing being reasoned about. I wonder if a being with a network-1 mind would have ever evolved intelligence. Assigning names to things, creating categories, allows us to reason about much more complex things. If the price we pay for that is occasionally getting into a confusing or pointless argument about “is it a rube or a blegg?” or “does a tree falling in a deserted forest make a sound?” or “is Pluto a planet?”, that seems like a fair price to pay.

• billswift: Okay, if they’re not neural networks, then there’s no explanation of how they work, so I don’t understand how to compare them all. How was I supposed to know from the posts how they work?

• Once again, great post.

Eliezer: “We know where Pluto is, and where it’s going; we know Pluto’s shape, and Pluto’s mass—but is it a planet? And yes, there were people who said this was a fight over definitions...”

It was a fight over definitions. Astronomers were trying to update their nomenclature to better handle new data (large bodies in the Kuiper belt). Pluto wasn’t quite like the other planets, but it wasn’t like the other asteroids either. So they called it a dwarf planet. Seems pretty reasonable to me. http://en.wikipedia.org/wiki/Dwarf_planet

• Another example:

Yeah, you could tell about your gender, sex, sexual orientation and gender role… but are you a boy or are you a girl???

• I’m a little bit lazy and already clicked here from the reductionism article: is the philosophical claim that of a non-eliminative reductionism? Or does Eliezer render a more eliminativist variant of reductionism? (I’m not implying that there is a contradiction between quoted sources, only some amount of “tension”.)

• I tend to resolve this sort of “is it really an X?” issue with the question “what’s it for?” This is similar to making a belief pay rent: why do you care if it’s really an X?

• Silas,

The essential idea is that network 1 can be trained on a target pattern, and after training, it will converge to the target when initialized with a partial or distorted version of the target. Wikipedia’s article on Hopfield networks has more.

Both types of networks can be used to predict observables given other observables. Network 1, being totally connected, is slower than network 2. But network 2 has a node which corresponds to no observable thing. It can leave one with the feeling that some question has not been completely answered even though all the observables have known states.

• I’ve always been vaguely aware of this, but never seen it laid out this clearly—good post. The more you think about it, the more ridiculous it seems. “No, we can know whether it’s a planet or not! We just have to know more about it!”

Scott, you forgot ‘I yam what I yam and that’s all what I yam’.

• Given that this bug relates to neural structure on an abstract, rather than biological, level, I wonder if it’s a cognitive universal beyond just humans? Would any pragmatic AGI built out of neurons necessarily have the same bias?

• The same bias to... what? From the inside, the AI might feel “conflicted” or “weirded out” by a yellow, furry, ellipsoid-shaped object, but that’s not necessarily a bug: maybe this feeling accumulates and eventually results in creating new sub-categories. The AI won’t necessarily get into the argument about definitions, because while part of that argument comes from the neural architecture above, the other part comes from the need to win arguments—and the evolutionary bias for humans to win arguments would not be present in most AI designs.

• For what it’s worth, I’ve always responded to questions such as “Is Pluto a planet?” in a manner more similar to Network 1 than Network 2. The debate strikes me as borderline nonsensical.

• Analytically, I’d have to agree, but the first thing that I say when I get this question is no. I explain that it depends on definition: we have a definition for “planet”, and we know the characteristics of Pluto. Pluto doesn’t match the requirements in the definition; ergo, not a planet.

Lots easier than trying to explain to someone that they don’t actually know what question they’re asking, although that’s of course a more elegant answer.

• So is it a planet or not?

• I doubt I’d be able to fully grasp this if I had not first read hpmor, so thanks for that. Also, eggs vs. ovals.

• This article argues to the effect that the node categorising an unnamed category over ‘Blegg’ and ‘Rube’ ought to be got rid of, in favour of a thought-system with only the other five nodes. This brings up the following questions. Firstly, how are we to know which categorisations are the ones we ought to get rid of, and which are the ones we ought to keep? Secondly, why is it that some categorisations ought to be got rid of, and others ought not be?

So far as I can see, the article does not attempt to directly answer the first question (correct me if I am mistaken). The article does seem to try to answer the second question through some kind of Essentialism: that ‘Blegg’ and ‘Rube’ don’t pick out real “kinds”, whilst the other categorisations do. Is this the correct reading of the article? And how exactly would that type of Essentialism pan out?

• I personally prefer names to be self-explanatory. Therefore, in this example I would consider a “blegg” to be a blue egg, regardless of its other qualities, and a “rube” to be a red cube, regardless of its other qualities. I suspect many other people would have a similar intuition.

• Most of this is about word association, multiple definitions of words, or not enough words to describe the situation.

In this case, a far more complicated network setup would be required to describe the neural activity. Not only would you need the network you have, but you would also need a second (or intermediate) network connecting sensory perceptions with certain words, and then yet another (or extended) network connecting those words with memory and cognitive associations with those words in the past. You could go on and on, by then also including the other words linked to those cognitive associations (and then the words associated with those, etc., etc.). In truth, even then, it would probably be a far more simplistic and less connected view than what is truly occurring in the brain.

What is occurring (90% of the time) with the “tree argument” is multiple definitions (and associations) for one word. For instance, let’s say ‘quot’ was a well-known English word for acoustic vibrations. Being a single word, with no other definitions, no one would ever (even when thinking) mistake it for the subjective experience of sound. People wouldn’t ask ‘If a tree falls, when no one is there, does it make a quot?’, because everyone would instantly associate the word ‘quot’ with the vibrations that must be made, and can be proven to exist, with or without people to listen to them (unless you are one of the few who claim the vibrations (or quots) do not exist, either). People also, then, would not ask if the tree made a sound, because they would instantly link the word ‘sound’ with the subjective experience, as the word would have no competing definition any longer (unless you are someone who claims the subjective experience of sound would still exist, even without a person [I’ve never met such a person, but chances are, they’re out there]).

As for the question of whether or not it is a blegg, this example is mostly true to what you’re saying, though word association for the colors ‘blue’ and ‘red’ would also play a role. The word ‘Blegg’ shares three letters with ‘blue’, and thus people would probably be inclined to call something that looks blue a ‘blegg’ when given the choice. As for a ‘Rube’, this word shares three letters with, and is similar in pronunciation to, ‘Ruby’. This, also, would make people more likely to say something is a ‘Rube’ if it is red, rather than if it was blue.

As for the question of Pluto being a planet (besides cultural bias by people who grew up calling it one), the argument lies in not enough people knowing the true definition (or else no set definition) of the word. From my understanding, planets are defined as things big enough to move a certain amount of other things around them in space. The evidence long ago showed that Pluto could do this, so it was called a planet. But now, the evidence says that Pluto cannot do this, so it is not a planet. If people asked ‘Is Pluto big enough to move things?’, the debate (if you could call it that) would be much different. People have known Pluto isn’t a ‘planet’ for years, but only when they discovered the dwarf planet ‘Eris’ did they decide Pluto would have to go, or else books would soon be saying our Solar System had eleven planets (two of which would actually be dwarf ones).

All of that being said, I enjoyed your writing very much, and agreed with much of it.

• Silas,

The keywords you need are “Hopfield network” and “Hebbian learning”. MacKay’s book has a section on them, starting on page 505.

• Silas, billswift: Eliezer does say, introducing his diagrams in the Neural Categories post: “Then I might design a neural network that looks something like this:”

• If a tree falls in a forest, but there’s nobody there to hear it, does it make a sound? Yes, but if there’s nobody there to hear it, it goes “AAAAAAh.”

• Again, very interesting. A mind composed of type 1 neural networks looks as though it wouldn’t in fact be able to do any categorising, so wouldn’t be able to do any predicting, so would in fact be pretty dumb and lead a very Hobbesian life....

• A neuron can repeatedly fire at 10 Hz. A nerve signal can travel 1 m in 0.001 s. A computer running at 14,000,000 Hz or 400,000,000 Hz, with irregular timing in the signal...?

• Are vibrations in the air that nobody hears sound? That’s the question.

It’s not, curiously, a matter of definition.

See Stanley Cavell’s discussion of what is a chair in The Claim of Reason, p. 71.

Wittgenstein goes a little deeper than is imagined.