# Against Modal Logics

Con­tinu­a­tion of: Grasp­ing Slip­pery Things
Fol­lowup to: Pos­si­bil­ity and Could-ness, Three Fal­la­cies of Teleology

When I try to hit a re­duc­tion prob­lem, what usu­ally hap­pens is that I “bounce”—that’s what I call it. There’s an al­most tan­gible feel to the failure, once you ab­stract and gen­er­al­ize and rec­og­nize it. Look­ing back, it seems that I man­aged to say most of what I had in mind for to­day’s post, in “Grasp­ing Slip­pery Things”. The “bounce” is when you try to an­a­lyze a word like could, or a no­tion like pos­si­bil­ity, and end up say­ing, “The set of re­al­iz­able wor­lds [A’] that fol­lows from an ini­tial start­ing world A op­er­ated on by a set of phys­i­cal laws f.” Where re­al­iz­able con­tains the full mys­tery of “pos­si­ble”—but you’ve made it into a ba­sic sym­bol, and added some other sym­bols: the illu­sion of for­mal­ity.
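The "bounce" can be made concrete in a few lines. This is a minimal sketch (all names hypothetical, not from the post): the definition below looks formal, but the entire mystery of "possible" has simply been renamed `realizable` and left unopened.

```python
def realizable(world):
    """The full mystery of 'possible' lives here, as an unopened black box."""
    raise NotImplementedError("this is the thing that still needs explaining")

def worlds_following_from(A, f):
    """'The set of realizable worlds A' that follows from an initial
    starting world A operated on by a set of physical laws f.'
    The symbols around `realizable` add only the illusion of formality."""
    return {w for w in f(A) if realizable(w)}
```

Any attempt to actually use the definition immediately hits the black box, which is the point.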

There are a num­ber of rea­sons why I feel that mod­ern philos­o­phy, even an­a­lytic philos­o­phy, has gone astray—so far astray that I sim­ply can’t make use of their years and years of ded­i­cated work, even when they would seem to be ask­ing ques­tions closely akin to mine.

The pro­lifer­a­tion of modal log­ics in philos­o­phy is a good illus­tra­tion of one ma­jor rea­son: Modern philos­o­phy doesn’t en­force re­duc­tion­ism, or even strive for it.

Most philosophers, as one would expect from Sturgeon's Law, are not very good. Which means that they're not even close to the level of competence it takes to analyze mentalistic black boxes into cognitive algorithms. Reductionism is, in modern times, an unusual talent. Insights on the order of Pearl et al.'s reduction of causality or Julian Barbour's reduction of time are rare.

So what these philoso­phers do in­stead, is “bounce” off the prob­lem into a new modal logic: A logic with sym­bols that em­body the mys­te­ri­ous, opaque, un­opened black box. A logic with prim­i­tives like “pos­si­ble” or “nec­es­sary”, to mark the places where the philoso­pher’s brain makes an in­ter­nal func­tion call to cog­ni­tive al­gorithms as yet un­known.

And then they pub­lish it and say, “Look at how pre­cisely I have defined my lan­guage!”

In the Wittgen­stei­nian era, philos­o­phy has been about lan­guage—about try­ing to give pre­cise mean­ing to terms.

The kind of work that I try to do is not about lan­guage. It is about re­duc­ing men­tal­is­tic mod­els to purely causal mod­els, about open­ing up black boxes to find com­pli­cated al­gorithms in­side, about dis­solv­ing mys­ter­ies—in a word, about cog­ni­tive sci­ence.

That’s what I think post-Wittgen­stei­nian philos­o­phy should be about—cog­ni­tive sci­ence.

But this kind of re­duc­tion­ism is hard work. Ideally, you’re look­ing for in­sights on the or­der of Ju­lian Bar­bour’s Machi­anism, to re­duce time to non-time; in­sights on the or­der of Judea Pearl’s con­di­tional in­de­pen­dence, to give a math­e­mat­i­cal struc­ture to causal­ity that isn’t just find­ing a new way to say “be­cause”; in­sights on the or­der of Bayesi­anism, to show that there is a unique struc­ture to un­cer­tainty ex­pressed quan­ti­ta­tively.

Just to make it clear that I’m not claiming a mag­i­cal and unique abil­ity, I would name Gary Drescher’s Good and Real as an ex­am­ple of a philo­soph­i­cal work that is com­men­su­rate with the kind of think­ing I have to try to do. Gary Drescher is an AI re­searcher turned philoso­pher, which may ex­plain why he un­der­stands the art of ask­ing, not What does this term mean?, but What cog­ni­tive al­gorithm, as seen from the in­side, would gen­er­ate this ap­par­ent mys­tery?

(I paused while reading the first chapter of G&R. It was immediately apparent that Drescher was thinking along lines so close to my own that I wanted to write up my own independent component before looking at his—I didn't want his way of phrasing things to take over my writing. Now that I'm done with zombies and metaethics, G&R is next up on my reading list.)

Consider the popular philosophical notion of "possible worlds". Have you ever seen a possible world? Is an electron either "possible" or "necessary"? Clearly, if you are talking about "possibility" and "necessity", you are talking about things that are not commensurate with electrons—which means that you're still dealing with a world as seen from the inner surface of a cognitive algorithm, a world of surface levers with all the underlying machinery hidden.

I have to make an AI out of elec­trons, in this one ac­tual world. I can’t make the AI out of pos­si­bil­ity-stuff, be­cause I can’t or­der a pos­si­ble tran­sis­tor. If the AI ever thinks about pos­si­bil­ity, it’s not go­ing to be be­cause the AI no­ticed a pos­si­ble world in its closet. It’s go­ing to be be­cause the non-on­tolog­i­cally-fun­da­men­tal con­struct of “pos­si­bil­ity” turns out to play a use­ful role in mod­el­ing and ma­nipu­lat­ing the one real world, a world that does not con­tain any fun­da­men­tally pos­si­ble things. Which is to say that al­gorithms which make use of a “pos­si­bil­ity” la­bel, ap­plied at cer­tain points, will turn out to cap­ture an ex­ploitable reg­u­lar­ity of the one real world. This is the kind of knowl­edge that Judea Pearl writes about. This is the kind of knowl­edge that AI re­searchers need. It is not the kind of knowl­edge that mod­ern philos­o­phy holds it­self to the stan­dard of hav­ing gen­er­ated, be­fore a philoso­pher gets credit for hav­ing writ­ten a pa­per.
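One way such an exploitable regularity might look in practice (a sketch under my own assumptions, not the post's method): "B is possible from A" can be cashed out as plain reachability under a transition relation—a label an algorithm can compute over the one real world, rather than an ontological primitive.

```python
from collections import deque

def reachable(start, transitions):
    """Return every state reachable from `start`, where `transitions`
    maps each state to its successor states (the 'physical laws')."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Toy transition relation (purely illustrative):
laws = {"A": ["B", "C"], "B": ["D"], "C": [], "D": []}

assert "D" in reachable("A", laws)      # "D is possible from A"
assert "Z" not in reachable("A", laws)  # "Z is not possible from A"
```

Nothing "fundamentally possible" appears anywhere in the program; the word has been reduced to membership in a computed set.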

Philoso­phers keep tel­ling me that I should look at philos­o­phy. I have, ev­ery now and then. But the main rea­son I look at philos­o­phy is when I find it de­sir­able to ex­plain things to philoso­phers. The work that has been done—the prod­ucts of these decades of mod­ern de­bate—is, by and large, just not com­men­su­rate with the kind of anal­y­sis AI needs. I feel a bit awful about say­ing this, be­cause it feels like I’m tel­ling philoso­phers that their life’s work has been a waste of time—not that pro­fes­sional philoso­phers would be likely to re­gard me as an au­thor­ity on whose life has been a waste of time. But if there’s any cen­tral­ized repos­i­tory of re­duc­tion­ist-grade nat­u­ral­is­tic cog­ni­tive philos­o­phy, I’ve never heard men­tion of it.

And: Philos­o­phy is just not ori­ented to the out­look of some­one who needs to re­solve the is­sue, im­ple­ment the cor­re­spond­ing solu­tion, and then find out—pos­si­bly fatally—whether they got it right or wrong. Philos­o­phy doesn’t re­solve things, it com­piles po­si­tions and ar­gu­ments. And if the de­bate about zom­bies is still con­sid­ered open, then I’m sorry, but as Jeffreys­sai says: Too slow! It would be one mat­ter if I could just look up the stan­dard an­swer and find that, lo and be­hold, it is cor­rect. But philos­o­phy, which hasn’t come to con­clu­sions and moved on from cog­ni­tive re­duc­tions that I re­gard as rel­a­tively sim­ple, doesn’t seem very likely to build com­plex cor­rect struc­tures of con­clu­sions.

Sorry—but philos­o­phy, even the bet­ter grade of mod­ern an­a­lytic philos­o­phy, doesn’t seem to end up com­men­su­rate with what I need, ex­cept by ac­ci­dent or by ex­traor­di­nary com­pe­tence. Parfit comes to mind; and I haven’t read much Den­nett, but Den­nett does seem to be try­ing to do the same sort of thing that I try to do; and of course there’s Gary Drescher. If there was a repos­i­tory of philo­soph­i­cal work along those lines—not con­cerned with defend­ing ba­sic ideas like anti-zom­bieism, but with ac­cept­ing those ba­sic ideas and mov­ing on to challenge more difficult quests of nat­u­ral­ism and cog­ni­tive re­duc­tion­ism—then that, I might well be in­ter­ested in read­ing. But I don’t know who, be­sides a few heroes, would be able to com­pile such a repos­i­tory—who else would see a modal logic as an ob­vi­ous bounce-off-the-mys­tery.

• @ EY: I feel a bit awful about say­ing this, be­cause it feels like I’m tel­ling philoso­phers that their life’s work has been a waste of time

Well, your buddy Robin Han­son has proved math­e­mat­i­cally that my life has been a waste of time in his Doc­tors kill se­ries of posts. I ac­cept the num­bers. Screw the philoso­phers; now it’s their turn. It’s all chem­i­cal neu­ro­trans­mit­ters. Next: the lawyers.

• Eliezer, I don’t think your com­ments would slight sen­si­ble philoso­phers, since many pro­fes­sional philoso­phers them­selves make com­pa­rable or more bit­ing crit­i­cisms about the dis­ci­pline (Rorty, Den­nett, Unger, now the ex­per­i­men­tal philos­o­phy move­ment, et al., go­ing back to the pos­i­tivists, and, if you like, the Pyrrhon­ists and atom­ists). I’m afraid not only have philoso­phers already writ­ten ex­ten­sively on meta-ethics, but they’ve also gen­er­ated an ex­ten­sive liter­a­ture on anti-philos­o­phy. They’ve been there, done that—too! I think Tyrell McAllister is quite right to say that since philos­o­phy largely con­sists of folks who can’t agree on the most work­able mod­els, your func­tional in­ter­ests will tend to be frus­trated by philos­o­phy. Like your es­timable hero Dick Feyn­man (who, ac­cord­ing to Len Mlod­inow, averred that “philos­o­phy is bul­lshit”), it’d be bet­ter for you sim­ply to get on with your tasks at hand, and not ex­pect much help from philos­o­phy—to find the worth­while stuff you’d have to be­come one. Maybe you can do that af­ter the FAI builds you an im­mor­tal cor­po­real form.

• Well of course one stan­dard re­sponse to such com­plaints is: “If you think you can do bet­ter, show us.” Not just bet­ter in a one-off way, but a bet­ter tra­di­tion that could con­tinue it­self. If you think you have done bet­ter and are be­ing un­fairly ig­nored, well then that is a differ­ent con­ver­sa­tion.

• Alright, I am go­ing to bite on this.

E writes: “The pro­lifer­a­tion of modal log­ics in philos­o­phy is a good illus­tra­tion of one ma­jor rea­son: Modern philos­o­phy doesn’t en­force re­duc­tion­ism, or even strive for it.”

The usual jus­tifi­ca­tion for skep­ti­cism about re­duc­tion­ism as a method­ol­ogy had to do with the sta­tus of the bridge laws: those an­a­lytic de­vices which re­duced A to B, whether A was a set of sen­tences, ob­ser­va­tions, etc. Like climb­ing the lad­ders in the Trac­ta­tus, they seemed to have no pur­pose, once used.

They weren't part of the reductive language, yet they were necessary for the reductive project.

Carnap was probably the last philosopher to try for a systematic reduction, and his attempts foundered on well-known problems, circa 1940.

E writes: “Con­sider the pop­u­lar philo­soph­i­cal no­tion of “pos­si­ble wor­lds”. Have you ever seen a pos­si­ble world? Is an elec­tron ei­ther “pos­si­ble” or “nec­es­sary”?”

Kripke's essay on possible worlds makes it clear that there is nothing mysterious about possible worlds: they are simply states of information. Nothing hard.

E writes: ” If there was a repos­i­tory of philo­soph­i­cal work along those lines—not con­cerned with defend­ing ba­sic ideas like anti-zom­bieism, but with ac­cept­ing those ba­sic ideas and mov­ing on to challenge more difficult quests of nat­u­ral­ism and cog­ni­tive re­duc­tion­ism—then that, I might well be in­ter­ested in read­ing.”

Pro­fes­sional philoso­phers are not sci­en­tists, but rather keep al­ive un­fash­ion­able ar­gu­ments that sci­en­tists and tech­ni­ci­ans wrongly be­lieve have been “solved”, as op­posed to ig­nored.

You are not suited for philosophical abstraction because you primarily want to build something. Get on with it, then, and stop talking about foundations—which may not exist. Just do it.

• Well of course one stan­dard re­sponse to such com­plaints is: “If you think you can do bet­ter, show us.” Not just bet­ter in a one-off way, but a bet­ter tra­di­tion that could con­tinue it­self.
Can do? It’s already been done, long ago—we call it ‘sci­ence’.

Do not con­fuse tech­ni­ci­ans and stylists with those that ap­ply the sci­en­tific method. Among those that do, some of the great­est of them made greater ‘philo­soph­i­cal’ progress while work­ing and writ­ing on mat­ters only tan­gen­tially re­lated to their nom­i­nal fields than countless gen­er­a­tions of so-called philoso­phers who sup­pos­edly ded­i­cated them­selves to the is­sues.

Even an am­a­teur sci­en­tist can quickly de­velop work­ing re­s­olu­tions to ques­tions that philos­o­phy has held up as eter­nal.

By this point, even an ex­traor­di­nar­ily-un­ob­ser­vant thinker should have re­al­ized that philos­o­phy isn’t about find­ing the an­swer to ques­tions—it’s about pos­tur­ing as profound while mouthing ques­tions, then talk­ing with oth­ers to mu­tu­ally demon­strate the in­tel­lec­tual im­por­tance of the topic and thus those that dis­cuss it. It’s a form of sta­tus-mas­tur­ba­tion.

• What on Earth is evolu­tion, if not the keep­ing of DNA se­quences that worked last time? It’s less effi­cient than hu­man in­duc­tion and stupi­der, be­cause it works only with DNA strings and is in­ca­pable of notic­ing sim­pler and more fun­da­men­tal gen­er­al­iza­tions like physics equa­tions. But of course it’s a crude form of in­duc­tive op­ti­miza­tion. What else would it be? There are no knowl­edge-gen­er­at­ing pro­cesses with­out some equiv­a­lent of an in­duc­tive prior or an as­sump­tion of reg­u­lar­ity. The maths es­tab­lish­ing this of­ten go un­der the name of No-Free-Lunch the­o­rems.
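The claim that selection is a crude inductive optimizer can be sketched in a few lines (the toy fitness function, seed, and parameters below are illustrative, not from the comment): keep the genome that worked last time, perturb it blindly, and keep whichever works better.

```python
import random

random.seed(0)  # fixed seed so the toy run is deterministic

def fitness(genome):
    """The 'niche': here it simply rewards 1-bits."""
    return sum(genome)

def evolve(length=20, generations=300):
    """Mutation proposes a blind change; selection keeps what worked."""
    best = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        mutant = best[:]
        mutant[random.randrange(length)] ^= 1  # mutation: a blind conjecture
        if fitness(mutant) >= fitness(best):   # selection: keep what worked
            best = mutant
    return best
```

The only regularity the loop exploits is that the niche tomorrow resembles the niche today—which is exactly the assumption-of-regularity the comment describes.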

• Evolu­tion does not in­crease a species’ im­plicit knowl­edge of the niche by repli­cat­ing genes. Mu­ta­tion (evolu­tion’s con­jec­tures) cre­ates po­ten­tial new knowl­edge of the niche. Selec­tion de­creases the “false” im­plicit con­jec­tures of mu­ta­tions and pre­vi­ous ge­netic mod­els of the niche.

So in­duc­tion does not in­crease the im­plicit knowl­edge of gene se­quences.
Trial (mu­ta­tion) and er­ror (falsifi­ca­tion) of im­plicit the­o­ries does. This is the pro­cess that the crit­i­cal ra­tio­nal­ist says hap­pens but more effi­ciently with hu­mans.

• Modal logic doesn't tell you if some sentence is possible or necessary; it tells you what sentences must have what modal values given some other sentences with prespecified modal values. Just like Kolmogorov doesn't tell you that the probability of a die landing on any face is 1/6, and that it can't land on two values; it just tells you that, given that, the probability of the die landing on an even value is 1/2.

Kolmogorov and Bayes seem to me to be guilty of the same sort of bouncing, but I think Bayes and Kolmogorov are clearly useful tools for the study of rationality. Modal logic does not define possibility, and it certainly does not reduce the notion of modality to anything, but it does constrain the assigning of modal values to fields of sentences. Any philosopher that argued otherwise is prolly a noob.
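The commenter's point can be sketched as a toy Kripke-style evaluator (the model and names below are made up): the accessibility relation and valuation are handed in as givens, and the logic only computes what is then necessary or possible—it never tells you which worlds are accessible in the first place.

```python
def necessary(p, world, access, val):
    """Box p: p holds at every world accessible from `world`."""
    return all(val[w][p] for w in access[world])

def possible(p, world, access, val):
    """Diamond p: p holds at some world accessible from `world`."""
    return any(val[w][p] for w in access[world])

# Two toy worlds, each accessible from itself and the other:
access = {"w1": ["w1", "w2"], "w2": ["w1", "w2"]}
val = {"w1": {"p": True}, "w2": {"p": False}}

assert possible("p", "w1", access, val)       # p holds at some accessible world
assert not necessary("p", "w1", access, val)  # but not at every accessible world
```

Everything contentious—what `access` and `val` should actually be—is an input, not an output, which mirrors the die-probability analogy above.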

But, in general, I agree with you. I am a philosopher, or at least that's my major, and I agree that it is only by extraordinary competence that philosophers ever produce useful reductions; that's something I hope to change by going into the field. And btw, I plan on using your work all the time to help me make that happen. So would it bother you, or seem strange, if I called you a philosopher, Eliezer? Cause I honestly say that you're one of my favorite philosophers, if not my favorite, often enough, and I would find it funny if my favorite philosopher didn't even consider himself a philosopher at all, and wasn't all that intimate with the literature. It's a fact I'd like to know for personal amusement.

• Kripke’s es­say on pos­si­ble wor­lds makes it clear that there is noth­ing mys­te­ri­ous about pos­si­ble wor­lds, they are sim­ply states of in­for­ma­tion. Noth­ing hard.

Good for Kripke, then. I’ve of­ten found that the ma­jor peo­ple in a field re­ally do de­serve their rep­u­ta­tions, and I haven’t as­serted that good philos­o­phy is im­pos­si­ble, just that the field has failed to sys­tem­atize it enough to make it worth­while read­ing.

How­ever, you do not solve an AI prob­lem by call­ing some­thing a “state of in­for­ma­tion”. Given that there’s only one real world, how are these “pos­si­ble wor­lds” for­mu­lated as cog­ni­tive rep­re­sen­ta­tions? I can’t write an AI un­til I know this.

And can you give me an immediate and agreed-upon answer to the question, "Is there a possible world where zombies exist?" Considering the questions that follow from that will make you realize how little of the structure of the "possible worlds" concept follows just from saying "it is a state of information".

Did Kripke mark his work as un­finished for failing to an­swer such ques­tions? Or did he ac­tu­ally try to an­swer them? Now that would earn se­ri­ous re­spect from me, and I might go out and start look­ing through Kripke’s stuff.

Robin: Well of course one stan­dard re­sponse to such com­plaints is: “If you think you can do bet­ter, show us.” Not just bet­ter in a one-off way, but a bet­ter tra­di­tion that could con­tinue it­self. If you think you have done bet­ter and are be­ing un­fairly ig­nored, well then that is a differ­ent con­ver­sa­tion.

Robin, my re­sponse here is mainly to philoso­phers who say, “We did all this work on metaethics, why are you ig­nor­ing us?” and my an­swer is: “The work you did is in­com­men­su­rable with even the kind of philos­o­phy that an AI re­searcher needs, which is cog­ni­tive philos­o­phy and the re­duc­tion of men­tal­is­tic think­ing to the non-men­tal; go read Gary Drescher for an ex­am­ple of the kind of men­tal la­bor I’m talk­ing about. Some of you may have done such work, but that’s no help to me if I have to wade through all of philos­o­phy to find it. Even your com­pila­tions of ar­gu­ments are lit­tle help to me in ac­tu­ally solv­ing AI prob­lems, though when I need to ex­plain some­thing I will of­ten check the Stan­ford En­cy­clo­pe­dia of Philos­o­phy to see what the stan­dard ar­gu­ments are. And I fi­nally ob­serve that if you, as a philoso­pher, have not gone out and stud­ied cog­ni­tive sci­ence and AI, then you re­ally have no right to com­plain about peo­ple ‘ig­nor­ing rele­vant re­search’, and more im­por­tantly, you have no idea what I’m look­ing for.” This is my re­sponse to the philoso­phers who feel slighted by my trav­els through what they feel should be their ter­ri­tory, with­out much ac­knowl­edg­ment.

However, with all that said—if I was trying to build a tradition that would continue itself, these posts on Overcoming Bias would form a large part of how I did it, though I would be much more interested in making them sound more impressive (which includes formalizing/declarifying their contents and publishing them in journals) and I would assign a higher priority to e.g. writing up my timeless decision theory.

• Philos­o­phy is just not ori­ented to the out­look of some­one who needs to re­solve the is­sue, im­ple­ment the cor­re­spond­ing solu­tion, and then find out—pos­si­bly fatally—whether they got it right or wrong. Philos­o­phy doesn’t re­solve things, it com­piles po­si­tions and ar­gu­ments.

This would be why I never finished that philos­o­phy de­gree. Aca­demic philos­o­phy does not seem par­tic­u­larly in­ter­ested in solv­ing the world’s prob­lems. Tyrrell McAllister has a good point on the value of pro­vid­ing a way of dis­cussing things, but if there is not even in prin­ci­ple a way of de­cid­ing what would con­sti­tute filling the black box, the dis­ci­pline will keep jug­gling the boxes.

There must be some merit in games of lan­guage and logic, but they re­main that: games. Su­doku and World of War­craft are similarly struc­tured games, and you could ar­gue se­ri­ously about whether an is­sue of Games Magaz­ine im­proves the world more or less than any schol­arly jour­nal J. men­tioned.

That said, start­ing with Stur­geon’s Law, we already knew the ma­jor­ity was waste pa­per. What is your prob­a­bil­ity that the good 10% is not worth the search cost to find it?

As a meta-Over­com­ing Bias com­ment, I think this post is nec­es­sary for Eliezer. When he dis­cusses philo­soph­i­cal is­sues, there are a half-dozen of us who cite a hun­dred-year his­tory of work on, for ex­am­ple, meta-ethics. I must in­ter­pret this post as a case for ra­tio­nal ig­no­rance, “I am not go­ing to read all that be­cause it is ob­vi­ously waste pa­per,” as op­posed to “I am fa­mil­iar with that but I have re­jected it” (or the lat­ter with very small val­ues of “fa­mil­iar”). So this is one of those one-link re­sponses.

We can med­i­tate on whether it re­solves the is­sue rather than giv­ing a feel­ing of re­s­olu­tion. With re­spect to philos­o­phy, I of­ten find sur­pris­ingly lit­tle progress since Hume (on ques­tions of in­ter­est to me). When an OB post ar­rives at a stan­dard ar­gu­ment, maybe via a differ­ent door, I ex­pect it to be able to en­gage stan­dard cri­tiques. “All stan­dard cri­tiques are mean­ingless black box jug­gling ex­er­cises un­til proven oth­er­wise” is per­haps a vi­able heuris­tic, but it feels con­ve­nient.

This also feels a bit like the “out­side view” Eliezer crit­i­cizes Robin for us­ing to make pre­dic­tions.

• Philos­o­phy is just not ori­ented to the out­look of some­one who needs to re­solve the is­sue, im­ple­ment the cor­re­spond­ing solu­tion, and then find out—pos­si­bly fatally—whether they got it right or wrong. Philos­o­phy doesn’t re­solve things, it com­piles po­si­tions and ar­gu­ments. And if the de­bate about zom­bies is still con­sid­ered open, then I’m sorry, but as Jeffreys­sai says: Too slow!

Still, I hope your Friendli­ness struc­ture can cope with the case where zom­bies are pos­si­ble. Well, I guess that one wouldn’t make any differ­ence—so I should say I hope you’re also try­ing to min­i­mize the num­ber of philo­soph­i­cal prob­lems you have to be right about.

• Are your feel­ings only con­fined to philos­o­phy, mod­ern or oth­er­wise? I feel the same sense of ‘modal logic’ ev­ery­where – art, poli­tics, even tech­nol­ogy – con­ver­sa­tions, ar­gu­ments, and dis­cus­sions seem end­lessly dis­con­nected, re­lated lan­guages speak­ing past one an­other.

I think Tyrrell nails it – philos­o­phy mainly pro­vides com­mon vo­cab­u­laries. And I must agree with him – it is no mean feat.

I highly recom­mend the var­i­ous works of Daniel Den­nett – hav­ing read him be­fore read­ing you, I feel pre­pared for ex­actly your fa­vored type of ar­gu­ment – dis­solv­ing con­fu­sion by re­ject­ing false di­chotomies and rigor­ously sep­a­rat­ing lay­ers.

The universe is endlessly amazing, and I feel blessed by being so curious. I think it's miraculous that philosophers are as good as they are!

• You write that “Philos­o­phy doesn’t re­solve things, it com­piles po­si­tions and ar­gu­ments”. I think that philos­o­phy should be granted as pro­vid­ing some­thing some­what more pos­i­tive than this: It pro­vides com­mon vo­cab­u­laries for ar­gu­ments. This is no mean feat, as I think you would grant, but it is far short of re­solv­ing ar­gu­ments which is what you need.

As you’ve ob­served, modal log­ics amount to ar­rang­ing a bunch of black boxes in very pre­cisely stipu­lated con­figu­ra­tions, while giv­ing no in­di­ca­tion as to the ac­tual con­tents of the black boxes. How­ever, if you mean to ac­cuse the philoso­phers of see­ing no need to fill the black boxes, then I think you go too far. Rather, it is just an an­thro­polog­i­cal fact that the philoso­phers can­not agree on how to fill the black boxes, or even on what con­sti­tutes filling a box. The re­sult is that they are un­able to gen­er­ate a con­sen­sus at the level of pre­ci­sion that you need. Nonethe­less, they at least gen­er­ate a con­sen­sus vo­cab­u­lary for dis­cussing var­i­ous can­di­date re­fine­ments down to some level, even if none of them reach as deep a level as you need.

I don't mean to contradict your assertion that (even) analytic philosophy doesn't provide what you need. I mean rather to emphasize what the problem is: It isn't exactly that people fail to see the need for reductionistic explanations. Rather the problem is that no one seems capable of convincing anyone else that his or her candidate reduction should be accepted to the exclusion of all others. It may be that the only way for someone to win this kind of argument is to build an actual functioning AI. In fact, I'm inclined to think that this is the case. If so, then, in my irrelevant judgement, you are working with just about the right amount of disregard for whatever consensus results might exist within the analytic philosophical tradition.

• Re­duc­tion­ism is, in mod­ern times, an un­usual tal­ent. In­sights on the or­der of Pearl et. al.’s re­duc­tion of causal­ity or Ju­lian Bar­bour’s re­duc­tion of time are rare.

Ye-e-e-s. But it is not at all clear whether Barbour's reduction works. (See Fay Dowker's criticisms in the appendices, for instance.) It's not a reduction in the sense that "heat is molecular motion" is a universally accepted, successful reduction.

• Ask­ing “is this re­duc­tive” and noth­ing else is not a good way to do philos­o­phy.

• Jaynes quoted a col­league: “Philoso­phers are free to do what­ever they please, be­cause they don’t have to do any­thing right.”

Philoso­phers lack the feed­back loop from re­al­ity that an en­g­ineer try­ing to build a mind has. Most of the heated philo­soph­i­cal squawk­ing about minds will be ren­dered ir­rele­vant once we start build­ing them.

One of the rea­sons Den­nett usu­ally makes sense is he tries to know the sci­ence in­volved.

Just the other day I was watching Dennett: http://www.youtube.com/watch?v=2hBQCBpyu74&feature=g-hist

At around 6:00, he’s say­ing how he sees the job of philoso­phers as match­ing up the man­i­fest image of the world with the sci­en­tific image of the world. I think that kind of philos­o­phy will always be needed.

• If a modal logic can hide a mys­tery in­side a black box, and ev­ery­thing out­side the black box be­haves con­sis­tently, that would be an in­cred­ibly use­ful achieve­ment. You would have iso­lated the mys­tery.

• Our knowl­edge of evolu­tion­ary the­ory de­pends on in­duc­tion. Without in­duc­tion, you can’t es­tab­lish the unifor­mity of na­ture. You have no grounds for be­liev­ing that what hap­pened yes­ter­day is any guide to what may hap­pen to­mor­row. Without in­duc­tion, sci­ence is to­tally screwed. Pop­per’s episte­mol­ogy was not sci­ence with­out in­duc­tion:

Wesley C. Sal­mon cri­tiques Pop­per’s falsifi­a­bil­ity by ar­gu­ing that in us­ing cor­rob­o­rated the­o­ries, in­duc­tion is be­ing used. Sal­mon stated, “Mo­dus tol­lens with­out cor­rob­o­ra­tion is empty; modus tol­lens with cor­rob­o­ra­tion is in­duc­tion.”
• I do not rate Pop­per’s con­tri­bu­tions in this area very highly—e.g. see here.

Science with­out in­duc­tion is a com­plete joke. Pop­per didn’t elimi­nate in­duc­tion, he just swept it un­der a con­sen­sual rug.

• Eliezer, I recom­mend you to read Den­nett’s “Ar­tifi­cial In­tel­li­gence as Psy­chol­ogy and as Philos­o­phy”, in his col­lec­tion of es­says Brain­storms. It may be a bit dated, but it makes a very nice case for a di­vi­sion of ter­ri­tory be­tween AI, Psy and Phi, and how each of them can help the oth­ers.

• Also, could you give an example of a philosophical problem that science has solved?

Con­sid­er­ing that sci­ence de­vel­oped out of a style of philo­soph­i­cal thought called ‘nat­u­ral philos­o­phy’, ev­ery ques­tion sci­ence has ad­dressed has been a philo­soph­i­cal one.

The real prob­lem is that when ac­tual progress is made on a ‘philo­soph­i­cal’ ques­tion, we as­so­ci­ate it with the branch of sci­ence that made the progress. Tur­ing and Godel were math­e­mat­i­ci­ans, Schroed­inger was a physi­cist (and one of his most im­pres­sive in­sights was in the in­ter­sec­tion of biol­ogy and in­for­ma­tion the­ory), Fermi a physi­cist, etc.

The only things that re­main in the cat­e­gory of philos­o­phy are those that are ut­terly use­less and fail to ex­pand our un­der­stand­ing of any as­pect of the world. It’s a sim­ple se­lec­tion effect—the gold is sifted out while the dross re­mains.

Turing alone resolved more questions that were traditionally considered to be within the bounds of "philosophy" as you refer to it than anyone I can think of offhand.

• Re: It con­tains co­gent crit­i­cism of in­duc­tivism and crypto-in­duc­tivism and one who un­der­stands these crit­i­cisms should see why Bayescraft is ster­ile.

Uh, surely that’s not the cor­rect moral. It’s like ar­gu­ing that physics is ster­ile be­cause of solip­sism.

• “it feels like I’m tel­ling philoso­phers that their life’s work has been a waste of time.”

If my im­me­di­ate in­ter­est is to trig­ger a sub­ject’s sal­iva re­flex, it would be a much bet­ter use of my time to vividly de­scribe to the sub­ject the sen­sa­tions of bit­ing into a lemon than it would to in­quire af­ter the al­gorithms that give rise to lemony sen­sa­tions.

I am re­duc­tion­ist, but I can’t quite imag­ine an in­tel­lec­tual life that ab­stracted away all con­scious in­ter­est in phe­nomenolog­i­cal struc­ture in fa­vor of mono­ma­ni­a­cal at­ten­tion to the base struc­ture. Then again, there’s no ac­count­ing for taste. (Or is there?)

• J.: Zeno’s Para­dox was solved by math­e­mat­i­ci­ans (hon­or­able mem­bers of the sci­en­tific com­mu­nity even if you think math­e­mat­ics is not part of sci­ence).

• I con­fess that I’m con­fused. Why does the “pro­lifer­a­tion” of modal log­ics im­ply that philoso­phers do not strive for re­duc­tion­ism? Why think that hav­ing sev­eral modal log­ics is a bad thing? Th­ese log­ics were de­vel­oped origi­nally as purely for­mal syn­tac­tic sys­tems with differ­ent sets of ax­ioms. In a sense, de­cry­ing the pro­lifer­a­tion of modal log­ics is akin to de­cry­ing the pro­lifer­a­tion of non-Eu­clidean ge­ome­tries. There were modal log­ics long be­fore philoso­phers ever spoke of pos­si­ble wor­lds, which, un­less you’re one of the few con­vinced by David Lewis, philoso­phers take sim­ply to be a use­ful heuris­tic when speak­ing of pos­si­bil­ity and ne­ces­sity. How can one talk about a purely causal model with some no­tion of ne­ces­sity? That would be a purely causal model with­out any no­tion of causal­ity. It strikes me that even the AI the­o­rist would like to dis­cuss cau­sa­tion, con­sis­tency of mod­els, log­i­cal im­pli­ca­tion, maybe even moral obli­ga­tion. Th­ese are all modal no­tions, but un­for­tu­nately, they’re not log­i­cally equiv­a­lent. We shouldn’t fall into a trap of be­ing re­duc­tion­ists purely for the sake of the re­duc­tion.

• I wrote a bunch of com­ments to this work while dis­cussing with Risto_Saarelma. But I thought I should rather post them here. I came here to dis­cuss cer­tain the­o­ries that are on the bor­der be­tween philos­o­phy and some­thing which could be use­ful for the con­struc­tion of AI. I’ve de­vel­oped my own such the­ory based on many years of work on an un­usual meta­phys­i­cal sys­tem called the Me­ta­physics of Qual­ity, which is largely ig­nored in the academy and de­vi­ates from the tra­di­tion. It’s not very “old” stuff. The for­ma­tion of that tra­di­tion of dis­cus­sion be­gan in 1974. So that’s my back­ground.

The kind of work that I try to do is not about lan­guage. It is about re­duc­ing men­tal­is­tic mod­els to purely causal mod­els, about open­ing up black boxes to find com­pli­cated al­gorithms in­side, about dis­solv­ing mys­ter­ies—in a word, about cog­ni­tive sci­ence.

What would I answer to the question of whether my work is about language? I’d say it’s both about language and algorithms, but it’s not some Chomsky-style stuff. It does account for the symbol grounding problem in a way that is not typically expected of language theory. But the point is, and I think this is important: even the mentalistic models do not currently exist in a coherent manner. So how are people going to reduce something undefined to purely causal models? That doesn’t sound very possible, so I’d say the goals of RP are relevant.

But this kind of reductionism is hard work.

I would imagine mainstream philosophy to be hard work, too. This work, unfortunately, would, to a great extent, consist of making correct references to highly illegible works.

Modern philosophy doesn’t enforce reductionism, or even strive for it.

Well… I wouldn’t say RP enforces reductionism or that it doesn’t enforce reductionism. It kinda ruins RP if you develop a metatheory where theories are classified either as reductionist or nonreductionist. You can do that—it’s not a logical contradiction—but the point of RP is to be such a theory that, even though we could construct such metatheoretic approaches to it, we don’t want to do so, because it’s not only useless but also complicates things for no apparent benefit. Unless, of course, we are not interested in AI but trying to devise some very grand philosophy, of which I’m not sure what it could be used for. My intention is that things like “reductionism” are placed within RP instead of placing RP into a box labeled “reductionism”.

RP is supposed to define things recursively. That is not, to my knowledge, impossible, so I’m not sure why the definition would necessarily have to be reductive in some sense. LISP, to my knowledge, is not reductive. But I’m not sure what Eliezer means by “reductive”. It seems like yet another philosophical concept. I’d better check if it’s defined somewhere on LW...

And then they publish it and say, “Look at how precisely I have defined my language!”

I’m not a fetishist. Not in this matter, at least. I want to define things formally because the structure of the theory is very hard to understand otherwise. The formal definitions make it easier to find out things I would not have otherwise noticed. That’s why I want to understand the formal definitions myself despite sometimes having other people practically do them for me.

Consider the popular philosophical notion of “possible worlds”. Have you ever seen a possible world?

I think that’s pretty cogent criticism. I’ve found the same kind of things troublesome.

Philosophers keep telling me that I should look at philosophy. I have, every now and then. But the main reason I look at philosophy is when I find it desirable to explain things to philosophers.

I understand how Eliezer feels. I guess I don’t even tell people they need to look at philosophy for its own sake. How should I know what someone else wants to do for its own sake? But it’s not so simple with RP, because it could actually work for something. The good philosophy is simply hard to find, and if I hadn’t studied the MOQ, I might very well now be laughing at Langan’s CTMU with many others, because I wouldn’t understand what that thing is that he is a bit awkwardly trying to express.

I’d like to illustrate the stagnation of academic philosophy with the following thought experiment. Let’s suppose someone has solved the problem of induction. What is the solution like?

• Ten pages?

• Hundred pages?

• Thousand pages?

• Does it contain no formulae or few formulae?

• Does it contain a lot of formulae?

I’ve read academic publications to the point that I don’t believe there is any work the academic community would, generally speaking, regard as a solution to the problem of induction. I simply don’t believe many scholars think there really can be such a thing. They are interested in “refining” the debate somehow. They don’t treat it as some matter that needs to be solved because it actually means something.

This example might not ring a bell for someone completely unfamiliar with academic philosophy, but I think it does illustrate how the field is flawed.

• I’d like to illustrate the stagnation of academic philosophy with the following thought experiment. Let’s suppose someone has solved the problem of induction. What is the solution like?

Ten pages? Hundred pages? Thousand pages? Does it contain no formulae or few formulae? Does it contain a lot of formulae?

I’ll go with 61 pages and quite a few formulae.

• This post demonstrates a deep misunderstanding of modal logics, and of the notions of possibility and necessity. One would expect that misunderstanding, given that Eli can’t really get himself to read philosophy. For example:

“I have to make an AI out of electrons, in this one actual world. I can’t make the AI out of possibility-stuff, because I can’t order a possible transistor.”

What? What kind of nonsense is this? No contemporary philosopher would ever say that you can make something out of “possibility stuff”, whatever the hell that is supposed to be.

Or this:

“It’s going to be because the non-ontologically-fundamental construct of “possibility” turns out to play a useful role in modeling and manipulating the one real world, a world that does not contain any fundamentally possible things.”

Eli, everything that is actual is trivially possible, according to every single contemporary analytic philosopher. I have no idea what you mean by “fundamentally possible”, but I doubt you mean anything useful by it. If x exists, then it’s possible that x exists. If x is an actual object, then x is a possible object. If you want, you can treat those claims as axioms. What’s your beef with them? Surely you don’t think, absurdly, that if x actually exists then it’s not possible that x exists?

One also has to wonder what your beef with meaning is. I mean, surely you mean something and mean to communicate something when you string lots of letters together. Is there nothing you mean by “reductionism”? If you don’t mean anything by using that linguistic term, then nobody should pay attention to you.

• Eli, everything that is actual is trivially possible, according to every single contemporary analytic philosopher. I have no idea what you mean by “fundamentally possible”, but I doubt you mean anything useful by it. If x exists, then it’s possible that x exists. If x is an actual object, then x is a possible object. If you want, you can treat those claims as axioms. What’s your beef with them? Surely you don’t think, absurdly, that if x actually exists then it’s not possible that x exists?

Allow me to attempt to translate (BTW, that a claim is so absurd is evidence it is not being made. Just sayin’.):

EY is not saying that some actual things are not possible. He is saying that things that are not actual, yet “possible”, are exactly the same, as far as the universe is concerned, as things that are not actual and not “possible”. Specifically, they are all nonexistent. Hence possibility is not fundamental in any ontological sense.

The general gist of the whole post is complaining that for all their precise logic, the people who invented modal logic have still not understood possibility and necessity. They formalized the intuitions about how possibility and necessity work, but didn’t solve what they actually are (which is: labels applied by a decision-making algorithm).
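
That parenthetical claim can be made concrete. As a minimal sketch (entirely my illustration, not anything from the post or thread; all names are invented), a planning agent that labels outcomes “possible” is just reporting the result of a reachability search over states:

```python
def reachable(start, goal, transitions):
    """Label a state "possible" iff some action sequence reaches it from start."""
    frontier, seen = [start], {start}
    while frontier:
        state = frontier.pop()
        if state == goal:
            return True  # search succeeds: the goal gets the label "possible"
        for nxt in transitions(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False  # search exhausted: "impossible" from this starting world

# Toy world: states are integers; the "physical laws" allow adding 1 or doubling.
step = lambda s: [s + 1, s * 2] if s < 20 else []
print(reachable(1, 12, step))   # True:  the agent "could" bring about 12
print(reachable(1, -5, step))   # False: -5 is "impossible" under these laws
```

On this reading, “could” is not an ontological primitive but the return value of a search procedure.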

• He is saying that things that are not actual, yet “possible”, are exactly the same, as far as the universe is concerned, as things that are not actual and not “possible”. Specifically, they are all nonexistent. Hence possibility is not fundamental in any ontological sense.

But the laws of the universe demarcate possible things from impossible things: so can you dismiss the reality of possibilities without dismissing the reality of laws?

• J said: I read this blog for Hanson’s posts, but unfortunately you are one of his co-bloggers

AND

but because you are at best a seventh rate thinker self-deceived into thinking he’s a second rate thinker.

don’t you think that Robin must think EY is at least a second rate thinker, or else he wouldn’t let himself be associated with such a lowly seventh rate thinker...

i completely understand if you don’t think EY is a worthwhile guy to read, no prob there...but then why read Hanson also? if they are colleagues and co-bloggers there must be something about EY that Robin thinks is first rate, no?

• then why read Hanson also? if they are colleagues and co-bloggers there must be something about EY that Robin thinks is first rate, no?

Not necessarily. Hanson might be a good thinker who is also a personal opportunist who’ll do anything to enhance his status, where co-publishing with Yudkowsky helped put Hanson’s blog on the map. Hanson could have “admired” Yudkowsky for his fan-club building capacities rather than for the high quality of his thinking.

• Re: Bayesianism and induction.

Bayesianism is a formalisation of induction. The philosophical problems with the foundations of inductive reasoning are equally problems with the foundations of Bayesianism. These problems are essentially unchanged since Hume’s era:

Rather than unproductive radical skepticism about everything, Hume said that he was actually advocating a practical skepticism based on common sense, wherein the inevitability of induction is accepted. Someone who insists on reason for certainty might, for instance, starve to death, as they would not infer the benefits of food based on previous observations of nutrition. - http://en.wikipedia.org/wiki/Problem_of_induction
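
The usual toy formalization of that practical, fallible induction is Laplace’s rule of succession: put a uniform prior on the unknown frequency and update on each observation. Notably, it never reaches certainty, which matches Hume’s point (the code is just my illustration, not anything the commenter proposed):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    # Laplace: with a uniform prior over the unknown rate, after seeing
    # `successes` out of `trials`, P(next trial succeeds) = (s + 1) / (n + 2).
    return Fraction(successes + 1, trials + 2)

print(rule_of_succession(0, 0))      # 1/2: no evidence, indifference
print(rule_of_succession(100, 100))  # 101/102: confident, but never certain
```
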
• There is a tradition of philosophy with value.

Many famous and modern philosophers are distractions from this. The same was true in the past. Each generation, most philosophers did not carry on the important, mainstream (in hindsight) tradition.

If you can’t tell which is which, to me that suggests you could learn something by studying philosophy. Once you do understand what’s what, then you can read exclusively good philosophy. For example, you’d know to ignore Wittgenstein, as the future will do. But the worthlessness of some philosophers does not stop people like William Godwin or Xenophanes from having valuable things to say (and the more recent philosophers who are carrying on their tradition).

• Caledonian, aside from the continental school, could you please give some examples of people trying to posture to be profound? In philosophy graduate programs today, you are explicitly told not to posture.

Also, could you give an example of a philosophical problem that science has solved? E.g., What makes right actions right? What makes a society just? What makes mathematical claims true?

• Zubon: This also feels a bit like the “outside view” Eliezer criticizes Robin for using to make predictions.

The problem is not in using the outside view, but in using an outside view that doesn’t really apply to what it’s being applied to, in trying to infer properties from surface similarities that don’t indicate that objects have similar causal structure. If you are studying a single object, statistics at an arbitrarily surface level provide valid grounds for predictions, if this single object doesn’t change its causal structure while under study.

• You’re right that he should be able to engage standard critiques, Zubon, but if my (negligible) experience with the philosophy of free will is any indication, many “standard critiques” are merely exercises in wooly thinking. It’s reasonable for him to step back and say, “I don’t have time to deal with this sort of thing.”

• Daniel Dennett is smart and usually right—but I find his writing style pretty yawn-inducing. I’m not very impressed by his detour into religion, either. Rather like Dawkins, it seems like he’s been dragged down into the gutter by the creationists.

1. I’m curious to hear Nick Bostrom’s response to this.

2. Something like modal logic is needed to automate solutions to things like this: Blue-eyed Monks. Though you might be right about the proliferation of modal logics.

3. You made some similar points here: Where Philosophy Meets Science. And Robin Hanson followed up here: On Philosophers

4. Both times it was pointed out that Paul Graham has some similar complaints about philosophy here: How to Do Philosophy
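
On point 2: the Blue-eyed Monks puzzle can in fact be automated without modal primitives, by brute-force elimination of possible worlds. In this sketch (my own construction, all names invented), “possible” is nothing more than membership in the surviving world set:

```python
from itertools import product

def monks(colors):
    """colors[i] is True if monk i has blue eyes. After the public announcement
    "at least one of you has blue eyes", return (night, monks who leave)."""
    n = len(colors)
    # All worlds consistent with the announcement.
    worlds = [w for w in product([False, True], repeat=n) if any(w)]
    actual, night = tuple(colors), 0
    while True:
        night += 1
        def knows_own_color(w, i):
            # Monk i sees everyone else's eyes; they know their own color iff
            # every surviving world matching that view agrees about w[i].
            view = w[:i] + w[i + 1:]
            return len({v[i] for v in worlds if v[:i] + v[i + 1:] == view}) == 1
        leavers = {w: {i for i in range(n) if knows_own_color(w, i)} for w in worlds}
        if leavers[actual]:
            return night, leavers[actual]
        # Nobody left tonight, which eliminates every world where someone would have.
        worlds = [w for w in worlds if not leavers[w]]

print(monks([True, True, True, False]))  # (3, {0, 1, 2}): all blue-eyed leave night 3
```

The modal talk (“monk i considers world w possible”) cashes out as ordinary set membership plus iteration, which is the sense in which the post says the black box can be opened.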

• What do you think of the philosophy faculty of MIT and Cal-Tech? I ask because I suspect the faculty there selects for philosophers that would be most useful to hard scientists and engineers (and for hard science and engineering students).

http://www.mit.edu/~philos/faculty.html

http://www.hss.caltech.edu/humanities/faculty

• I’ve made similar dismissals of philosophy’s fruits at this blog and elsewhere. That was supposed to make me a nihilist, philistine psychopath. As I recall, Eliezer did not agree with my analogy to theology and astrology.

• Sometimes I enjoy these postings, sometimes I am puzzled. They often are so self-referential (links are mostly to older postings of the same author) and ranting that I wonder whether I am being had. I don’t doubt anyone’s good intentions. I am just documenting my belief that Eliezer’s state is binary: either the next Wittgenstein or a world-class delusional crank.

• I agree on Pearl’s accomplishment.

I have read Dennett, and he does a good job of explaining what consciousness is and how it could arise out of non-conscious parts. William Calvin was trying to do the same thing with how wetware (in the form that he knew it at the time) could do something like thinking. Jeff Hawkins had more details of how the components of the brain work and interact, and did a more thorough job of explaining how the pieces must work together and how thought could emerge from the interplay. There is definitely material in “On Intelligence” that could help you think about how thought could arise out of purely physical interactions.

I’ll have to look into Drescher.

• The “bounce” is when you try to analyze a word like could, or a notion like possibility, and end up saying, “The set of realizable worlds [A’] that follows from an initial starting world A operated on by a set of physical laws f.” Where realizable contains the full mystery of “possible”—but you’ve made it into a basic symbol, and added some other symbols: the illusion of formality.

Can you keep on “reducing”—unpacking the meanings of terms—without hitting bedrock? Is there anyone who doesn’t know what “can” and “could” mean? Can you not co-define a set of words in terms of each other, coherentistically, without prejudice as to what is fundamental?

• Tim,

Re: Bayesianism and induction.

Given your concession that Bayesianism is a formalisation of induction, I don’t understand your original criticism that my saying inductivism renders Bayesianism sterile is like saying solipsism renders physics sterile.

Here’s a definition from David Deutsch’s “The Fabric of Reality”:

Crypto-Inductivist: Someone who believes that the invalidity of inductive reasoning raises a serious philosophical problem, namely the problem of how to justify relying on scientific theories.

Crypto-inductivists have an “induction shaped” gap in their scheme of things.

Critical rationalism really did solve the problem of induction: It has no “induction shaped” gap.

I’m guessing from your Hume quote that you think it did so by resorting to radical skepticism, but if you think this you are mistaken.

• Tim, you wrote here that:

A perfectly rational agent who denies the validity of induction would be totally unimpressed by Bayesian arguments.

Have you changed your mind? Do you now deny that Bayescraft relies on induction?

• There are a number of reasons why I feel that modern philosophy, even analytic philosophy, has gone astray—so far astray that I simply can’t make use of their years and years of dedicated work.

Yes, much modern philosophy has gone astray. But some hasn’t. I would cite, for example, the thinking of critical rationalists such as Karl Popper, William Warren Bartley, David Deutsch, and David Miller.

Moreover, I maintain that critical rationalism ought to be of use to you. First, it contains cogent criticism of inductivism and crypto-inductivism, and one who understands these criticisms should see why Bayescraft is sterile. This knowledge is not only useful, it can’t be ignored. Second, critical rationalism, and not Bayescraft, is our best current theory of knowledge and how we come to know things. Best theories are useful not only in themselves but also for the problems they contain.

• What did you think of that part of EY’s Bayes intro where he reduces Falsificationism to a special case of Bayesianism?
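
For reference, that reduction is a one-line consequence of Bayes’ theorem: evidence a hypothesis forbids drives its posterior to zero, while Bayes also covers the graded cases falsificationism stays silent on. A minimal sketch (the function and numbers are mine, not from EY’s intro):

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    # Bayes' theorem: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# Falsification as the limiting case: H forbids E, E is observed, H is dead.
print(posterior(0.9, 0.0, 0.5))   # 0.0
# Graded confirmation, which pure falsificationism cannot express.
print(posterior(0.5, 0.9, 0.1))   # approximately 0.9
```
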

• Philosophers are scientists; they’re just really bad scientists for the most part. This is due to the fact that they draw their power from the couple thousand years of moderately interesting mistakes that we call “the history of western philosophy”. What makes philosophers different from any other group of scientists is simply the targets of inquiry they specialize (or try to specialize) in. The same thing that makes a biologist different from a physicist. Some philosophers have done well, but they had to invent too much of the art for themselves; not enough of their power came from the cumulative learning of their predecessors being passed verbally. Often the scriptures have done more to lead new students astray than to lead them to victory. This sort of staggeringly slow progress, taking thousands of years, and rarely ever leading to professional consensus, can be starkly contrasted with the rapid progress of the rather young science of biology.

We are all Bayesian here, right? Let’s cut to the chase. Either philosophers will find predictive hypothesis spaces that make empirically testable predictions and manage to update their belief values for those hypotheses with Bayesian evidence, or the field of philosophy is, and always was, as doomed as the field of astrology. Some philosophers do of course do this sometimes, since some philosophers are sometimes right.

The problem philosophy faces is that it hasn’t been able to reliably teach its students how to do the Bayes dance in philosophy, the way biology has been able to teach its students to do the Bayes dance in biology. What I suggest we philosophers do is take a good long look at top-notch biology (or physics, or psychology, or mathematics, or computer science, or astronomy, or geology, or economics, or any other science progressing faster than wax melts) training and philosophy training, and figure out what’s going on in the biology training community that isn’t going on in the philosophy training community. Then we try to bridge the gap.

Philosophy is hard, but so is supersymmetry, and for much the same reasons. If the Bayes dance can handle the rest of science, I get the feeling it shouldn’t get stumped here. There are solvable problems of philosophy; they are just really hard, and really hard scientific problems require really good science to get solved; not moderate science, or good enough science — really good science. It is no wonder that philosophy has steadily progressed at the pace of a snail for the last 2000 years; its students have been given Plato in the absence of Bayes.

• “What on Earth is evolution, if not the keeping of DNA sequences that worked last time?”

It’s also replication and variation.

It’s less efficient than human induction and stupider, because it works only with DNA strings and is incapable of noticing simpler and more fundamental generalizations like physics equations. But of course it’s a crude form of inductive optimization. What else would it be?

That seems like an argument from “failure of imagination”. Quite simply, evolution is trial and error.

There are no knowledge-generating processes without some equivalent of an inductive prior or an assumption of regularity.

This is just question begging, as I think you are aware. How did we come by the knowledge of induction? Did we induce it? Impossible! So, therefore, there must be at least one way to knowledge that doesn’t involve induction.

This stuff is all old hat. Philosophers of the 20th century like Popper and Bartley realized that the whole induction quagmire is caused by people looking for justified sources of knowledge. They concluded that justificationism is a mistake and replaced it with critical rationalism. Now there are bad scholars who claim that critical rationalism sneaks induction in through the back door. But that is just bad scholarship.

It’s a shame to still be wasting time on induction in the 21st century. Rather than rehashing old problems, shouldn’t we be building on what the best of 20th century philosophy gave us?

The maths establishing this often go under the name of No-Free-Lunch theorems.

Were the assumptions of these theorems inductively justified?

• It’s true that contemporary philosophy is still very much obsessed with language despite attempts by practitioners to move on. Observation is talked about in terms of observation sentences. Science is taken to be a set of statements. Realism is taken to be the doctrine that there are objects to which our statements refer. Reductionism is the ability to translate a sentence in one field into a sentence in another. The philosophy of mind concerns itself with finding a way to reconcile the lack of sentence-like structures in our brain with a perverse desire for sentence-like structures. But cognitive science is itself a development of this odd way of thinking about the world; sentences become algorithms and everything carries on the same. I don’t think you’re really too far removed from this tradition.

• Talking in terms of sentences is not reifying them; cognitive science still uses sentences, which are not insulated from interpretational problems.

• what is the basis for the position that knowledge of the world must come from analytical/probabilistic models? I’m not questioning the “correctness” of your view, only wondering your basis for it. It seems awfully convenient that a type of model that yields conclusions is in fact the correct one—put another way, why is the availability of a clear methodology that gives you answers indicative of its universal applicability in attaining knowledge?

Traditional philosophy, as you correctly point out, has failed to bridge its theory to practice—but perhaps that is the flaw of the users and not the theory. Rationalists generally believe the use of probabilities is sound methodology, but the problems regarding decision-making are a flaw of the practitioners. Though I appreciate you likely disagree, perhaps we have the same problem with philosophy. Though there are no clear answers, the models of thought they provide could effectively apply in practical situations; it’s just that no philosopher has been able to get there.

• Tim,

Re: Critical rationalism

Critical rationalism is similar to evolutionary adaptation (though there are some important differences). Do you think evolution depends on induction, or would you admit that there are knowledge-generation processes that do not require induction in any way, shape, or form?

• I read this blog for Hanson’s posts, but unfortunately you are one of his co-bloggers. I wouldn’t be surprised if you delete this or fail to post it, but whatever. Anyways, I occasionally read something you write, and I am struck by how dismissive you are of contemporary philosophy, usually treating it as a strawman or cartoon.

Can you please put your money where your mouth is and publish a philosophical paper in a good journal (such as Philosophical Review, Nous, Philosophy and Phenomenological Research, Journal of Philosophy, Ethics, Mind, or Phil Studies)? Lots of philosophers would love your approach. (I think you will fail to publish anything, not because the discipline is biased against you, but because you are at best a seventh rate thinker self-deceived into thinking he’s a second rate thinker. I’m not saying that to be abusive, but, really, to be frank.)

Once you do this, I will begin taking you seriously. Until then, I consider you a very smart crank.

P.S., since you frequently write on topics other than your specialty (the singularity), such as moral realism, reductionism, etc., please make your publication one of these topics.

• Funny… Have you ever fallen in love?

Albeit (I’ll bite)

How do you determine all these metaphorical examples without having experimental proof to back them up? Just because you read does not give you the right to determine how people feel...

Just an opinion, Anna

• I think you’re an ass, Eliezer. But you are fun. I’m an ass too; most philosophers are. You should consider that the analytic tradition is mostly an evolutionarily extinct avenue of philosophy. I do think that you are basically right, however. Cognitive science and AI is certainly one of the most important aspects of metaphysics. But ultimately, atomically modeling the entire cosmological evolution is the real goal of metaphysics. Everything else is just nerds entertaining themselves.

• If you think you’re so much better than philosophers, why don’t you program an AI to write publishable philosophy papers, hmm?

• And I thought you were intelligent... well I guess not... maybe mathematical... but you keep doing the same thing... but you will lose... sry... I thought you learned... goodbye