Philosophy: A Diseased Discipline

Part of the sequence: Rationality and Philosophy

Eliezer’s anti-philosophy post Against Modal Logics was pretty controversial, while my recent pro-philosophy (by LW standards) post and my list of useful mainstream philosophy contributions were massively up-voted. This suggests a significant appreciation for mainstream philosophy on Less Wrong—not surprising, since Less Wrong covers so many philosophical topics.

If you followed the recent very long debate between Eliezer and me over the value of mainstream philosophy, you may have gotten the impression that Eliezer and I strongly diverge on the subject. But I suspect I agree more with Eliezer on the value of mainstream philosophy than I do with many Less Wrong readers—perhaps most.

That might sound odd coming from someone who writes a philosophy blog and spends most of his spare time doing philosophy, so let me explain myself. (Warning: broad generalizations ahead! There are exceptions.)

Failed methods

Large swaths of philosophy (e.g. continental and postmodern philosophy) often don’t even try to be clear, rigorous, or scientifically respectable. This is philosophy of the “Uncle Joe’s musings on the meaning of life” sort, except that it’s dressed up in big words and long footnotes. You will occasionally stumble upon an argument, but it falls prey to magical categories and language confusions and non-natural hypotheses. You may also stumble upon science or math, but they are used to ‘prove’ things irrelevant to the actual scientific data or the equations used.

Analytic philosophy is clearer, more rigorous, and better with math and science, but it only does a slightly better job of avoiding magical categories, language confusions, and non-natural hypotheses. Moreover, its central tool is intuition, a reliance that displays a near-total ignorance of how brains work. As Michael Vassar observes, philosophers are “spectacularly bad” at understanding that their intuitions are generated by cognitive algorithms.

A diseased discipline

What about Quinean naturalists? Many of them at least understand the basics: that things are made of atoms, that many questions don’t need to be answered but instead dissolved, that the brain is not an a priori truth factory, that intuitions come from cognitive algorithms, that humans are loaded with bias, that language is full of tricks, and that justification rests in the lens that can see its flaws. Some of them are even Bayesians.

Like I said, a few naturalistic philosophers are doing some useful work. But the signal-to-noise ratio is much lower even in naturalistic philosophy than it is in, say, behavioral economics or cognitive neuroscience or artificial intelligence or statistics. Why? Here are some hypotheses, based on my thousands of hours in the literature:

  1. Many philosophers have been infected (often by later Wittgenstein) with the idea that philosophy is supposed to be useless. If it’s useful, then it’s science or math or something else, but not philosophy. Michael Bishop says a common complaint from his colleagues about his 2004 book is that it is too useful.

  2. Most philosophers don’t understand the basics, so naturalists spend much of their time coming up with new ways to argue that people are made of atoms and intuitions don’t trump science. They fight beside the poor atheistic philosophers who keep coming up with new ways to argue that the universe was not created by someone’s invisible magical friend.

  3. Philosophy has grown into an abnormally backward-looking discipline. Scientists like to put their work in the context of what old dead guys said, too, but philosophers have a real fetish for it. Even naturalists spend a fair amount of time re-interpreting Hume and Dewey yet again.

  4. Because they were trained in traditional philosophical ideas, arguments, and frames of mind, naturalists will anchor and adjust from traditional philosophy when they make progress, rather than scrapping the whole mess and starting from scratch with a correct understanding of language, physics, and cognitive science. Sometimes, philosophical work is useful to build from: Judea Pearl’s triumphant work on causality built on earlier counterfactual accounts of causality from philosophy. Other times, it’s best to ignore the past confusions. Eliezer made most of his philosophical progress on his own, in order to solve problems in AI, and only later looked around in philosophy to see which standard position his own theory was most similar to.

  5. Many naturalists aren’t trained in cognitive science or AI. Cognitive science is essential because the tool we use to philosophize is the brain, and if you don’t know how your tool works then you’ll use it poorly. AI is useful because it keeps you honest: you can’t write confused concepts or non-natural hypotheses in a programming language.

  6. Mainstream philosophy publishing favors the established positions and arguments. You’re more likely to get published if you write about how intuitions are useless in solving Gettier problems (which is a confused set of non-problems anyway) than if you write about how to make a superintelligent machine preserve its utility function across millions of self-modifications.

  7. Even much of the useful work naturalistic philosophers do is not cutting-edge. Chalmers’ update of I.J. Good’s ‘intelligence explosion’ argument is the best one-stop summary available, but it doesn’t get as far as the Hanson-Yudkowsky AI-Foom debate of 2008 did. Talbot (2009) and Bishop & Trout (2004) provide handy summaries of much of the heuristics and biases literature, just as Eliezer has so usefully done on Less Wrong, but of course this isn’t cutting-edge. You could always just read the primary literature by Kahneman and Tversky and others.

Of course, there is mainstream philosophy that is both good and cutting-edge: the work of Nick Bostrom and Daniel Dennett stands out. And of course there is a role for those who keep arguing for atheism and reductionism and so on. I was a fundamentalist Christian until I read some contemporary atheistic philosophy, so that kind of work definitely does some good.

But if you’re looking to solve cutting-edge problems, mainstream philosophy is one of the last places you should look. Try to find the answer in the cognitive science or AI literature first, or try to solve the problem by applying rationalist thinking: like this.

Swimming the murky waters of mainstream philosophy is perhaps a job best left for those who have already spent several years studying it—that is, people like me. I already know what things are called and where to look, and I have an efficient filter for skipping past the 95% of philosophy that isn’t useful to me. And hopefully my rationalist training will protect me from picking up bad habits of thought.

Philosophy: the way forward

Unfortunately, many important problems are fundamentally philosophical problems. Philosophy itself is unavoidable. How can we proceed?

First, we must remain vigilant with our rationality training. It is not easy to overcome millions of years of brain evolution, and as long as you are human there is no final victory. You will always wake up the next morning as Homo sapiens.

Second, if you want to contribute to cutting-edge problems, even ones that seem philosophical, it’s far more productive to study math and science than it is to study philosophy. You’ll learn more in math and science, and your learning will be of a higher quality. Ask a fellow rationalist who is knowledgeable about philosophy what the standard positions and arguments in philosophy are on your topic. If any of them seem really useful, grab those particular works and read them. But again: you’re probably better off trying to solve the problem by thinking like a cognitive scientist or an AI programmer than by ingesting mainstream philosophy.

However, I must say that I wish so much of Eliezer’s cutting-edge work weren’t spread out across hundreds of Less Wrong blog posts and long SIAI articles written in an idiosyncratic style and vocabulary. I would rather these ideas were written in standard academic form, even if they transcended the standard game of mainstream philosophy.

But it’s one thing to complain; another to offer solutions. So let me tell you what I think cutting-edge philosophy should be. As you might expect, my vision is to combine what’s good in LW-style philosophy with what’s good in mainstream philosophy, and toss out the rest:

  1. Write short articles. One or two major ideas or arguments per article, maximum. Try to keep each article under 20 pages. It’s hard to follow a hundred-page argument.

  2. Open each article by explaining the context and goals of the article (even if you cover mostly the same ground in the opening of 5 other articles). What topic are you discussing? Which problem do you want to solve? What have other people said about the problem? What will you accomplish in the paper? Introduce key terms, cite standard sources and positions on the problem you’ll be discussing, even if you disagree with them.

  3. If possible, use the standard terms in the field. If the standard terms are flawed, explain why they are flawed and then introduce your new terms in that context so everybody knows what you’re talking about. This requires that you research your topic so you know what the standard terms and positions are. If you’re talking about a problem in cognitive science, you’ll need to read cognitive science literature. If you’re talking about a problem in social science, you’ll need to read social science literature. If you’re talking about a problem in epistemology or morality, you’ll need to read philosophy.

  4. Write as clearly and simply as possible. Organize the paper with lots of headings and subheadings. Put in lots of ‘hand-holding’ sentences to help your reader along: explain the point of the previous section, then explain why the next section is necessary, etc. Patiently guide your reader through every step of the argument, especially if it is long and complicated.

  5. Always cite the relevant literature. If you can’t find much work relevant to your topic, you almost certainly haven’t looked hard enough. Citing the relevant literature not only lends weight to your argument, but also enables the reader to track down and examine the ideas or claims you are discussing. Being lazy with your citations is a sure way to frustrate precisely those readers who care enough to read your paper closely.

  6. Think like a cognitive scientist and AI programmer. Watch out for biases. Avoid magical categories and language confusions and non-natural hypotheses. Look at your intuitions from the outside, as cognitive algorithms. Update your beliefs in response to evidence. [This one is central. This is LW-style philosophy.]

  7. Use your rationality training, but avoid language that is unique to Less Wrong. Nearly all these terms and ideas have standard names outside of Less Wrong (though in many cases Less Wrong already uses the standard language).

  8. Don’t dwell too long on what old dead guys said, nor on semantic debates. Dissolve semantic problems and move on.

  9. Conclude with a summary of your paper, and suggest directions for future research.

  10. Ask fellow rationalists to read drafts of your article, then rewrite. Then rewrite again, adding more citations and hand-holding sentences.

  11. Format the article attractively. A well-chosen font makes for an easier read. Then publish (in a journal or elsewhere).

Note that this is not just my vision of how to get published in journals. It’s my vision of how to do philosophy.

Meeting journal standards is not the most important reason to follow the suggestions above. Write short articles because they’re easier to follow. Open with the context and goals of your article because that makes it easier to understand, and lets people decide right away whether your article fits their interests. Use standard terms so that people already familiar with the topic aren’t annoyed at having to learn a whole new vocabulary just to read your paper. Cite the relevant positions and arguments so that people have a sense of the context of what you’re doing, and can look up what other people have said on the topic. Write clearly and simply and with much organization so that your paper is not wearying to read. Write lots of hand-holding sentences because we always communicate less effectively than we thought we did. Cite the relevant literature as much as possible to assist your most careful readers in getting the information they want to know. Use your rationality training to remain sharp at all times. And so on.

That is what cutting-edge philosophy could look like, I think.

Next post: How You Make Judgments

Previous post: Less Wrong Rationality and Mainstream Philosophy