Quine’s naturalized epistemology. Epistemology is a branch of cognitive science
Saying this may count as staking an exciting position in philosophy, already right there; but merely saying this doesn’t shape my expectations about how people think, or tell me how to build an AI, or how to expect or do anything concrete that I couldn’t do before, so from an LW perspective this isn’t yet a move on the gameboard. At best it introduces a move on the gameboard.
Tarski on language and truth.
I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician. Perhaps you can learn about him in philosophy, but that doesn’t imply people should study philosophy if they will also run into Tarski by doing mathematics.
Chalmers’ formalization of Good’s intelligence explosion argument...
...was great for introducing mainstream academia to Good, but if you compare it to http://wiki.lesswrong.com/wiki/The_Hanson-Yudkowsky_AI-Foom_Debate then you’ll see that most of the issues raised didn’t fit into Chalmers’s decomposition at all. Not suggesting that he should’ve done it differently in a first paper, but still, Chalmers’s formalization doesn’t yet represent most of the debates that have been done in this community. It’s more an illustration of how far you have to simplify things down for the sake of getting published in the mainstream, than an argument that you ought to be learning this sort of thing from the mainstream.
Dennett on belief in belief.
Acknowledged and credited. Like Drescher, Dennett is one of the known exceptions.
Bratman on intention. Bratman’s 1987 book on intention has been a major inspiration to AI researchers working on belief-desire-intention models of intelligent behavior...
Appears as a citation only in AIMA 2nd edition, described as a philosopher who approves of GOFAI. “Not all philosophers are critical of GOFAI, however; some are, in fact, ardent advocates and even practitioners… Michael Bratman has applied his “belief-desire-intention” model of human psychology (Bratman, 1987) to AI research on planning (Bratman, 1992).” This is the only mention in the 2nd edition. Perhaps by the time they wrote the third edition they read more Bratman and figured that he could be used to describe work they had already done? Not exactly a “major inspiration”, if so...
Functionalism and multiple realizability.
This comes under the heading of “things that rather a lot of computer programmers, though not all of them, can see as immediately obvious even if philosophers argue it afterward”. I really don’t think that computer programmers would be at a loss to understand that different systems can implement the same algorithm if not for Putnam and Lewis.
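The programmer’s version of the point fits in a few lines. A toy sketch (my own, not anyone’s official example): two internally different systems realizing the same abstract algorithm.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_recursive(n):
    # Realization 1: memoized recursion.
    return n if n < 2 else fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    # Realization 2: a loop over two "registers".
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Different internal organization, same function: one algorithm, multiply realized.
assert all(fib_recursive(n) == fib_iterative(n) for n in range(30))
```

No programmer needs a philosophy seminar to see why the assertion passes.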
Explaining the cognitive processes that generate our intuitions… Talbot describes the project of his philosophy dissertation for USC this way: “...where psychological research indicates that certain intuitions are likely to be inaccurate, or that whole categories of intuitions are not good evidence, this will overall benefit philosophy.”...
Same comment as for Quine: This might introduce interesting work, but while saying just this may count as an exciting philosophical position, it’s not a move on the LW gameboard until you get to specifics. Then it’s not a very impressive move unless it involves doing nonobvious reductionism, not just “Bias X might make philosophers want to believe in position Y”. You are not being held to a special standard as Luke here; a friend named Kip Werking once did some work arguing that we have lots of cognitive biases pushing us to believe in libertarian free will that I thought made a nice illustration of the difference between LW-style decomposition of a cognitive algorithm and treating biases as an argument in the war of surface intuitions.
Pearl on causality.
Mathematician and AI researcher. He may have mentioned the philosophical literature in his book. It’s what academics do. He may even have read the philosophers before he worked out the answer for himself. He may even have found that reading philosophers getting it wrong helped spur him to think about the problem and deduce the right answer by contrast—I’ve done some of that over the course of my career, though more in the early phases than the later phases. Can you really describe Pearl’s work as “building” on philosophy, when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation? Has Pearl named a previous philosopher, who was not a mathematician, who Pearl thought was getting it right?
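As a toy illustration of the gap between the correlation-only view and Pearl’s causal one (a simulation I am inventing for this comment, not Pearl’s own example): a confounder Z drives both X and Y, there is no arrow from X to Y, yet X and Y are correlated; adjusting for Z, per the back-door idea, recovers the null effect.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
Z = rng.standard_normal(n)
X = Z + rng.standard_normal(n)        # Z -> X
Y = 2 * Z + rng.standard_normal(n)    # Z -> Y, and no arrow X -> Y

naive = np.corrcoef(X, Y)[0, 1]
assert naive > 0.4                    # spurious association from the confounder

# Regress Z out of both X and Y, then re-check the association.
X_res = X - np.polyfit(Z, X, 1)[0] * Z
Y_res = Y - np.polyfit(Z, Y, 1)[0] * Z
adjusted = np.corrcoef(X_res, Y_res)[0, 1]
assert abs(adjusted) < 0.05           # the causal effect of X on Y is ~0
```

Correlation alone cannot distinguish this world from one where X really does cause Y; that is the distinction Pearl’s machinery makes precise.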
Drescher’s Good and Real.
Previously named by me as good philosophy, as done by an AI researcher coming in from outside for some odd reason. Not exactly a good sign for philosophy when you think about it.
Dennett’s “intentional stance.”
For a change I actually did read about this before forming my own AI theories. I can’t recall ever actually using it, though. It’s for helping people who are confused in a way that I wasn’t confused to begin with. Dennett is in any case a widely known and named exception.
Bostrom on anthropic reasoning. And global catastrophic risks. And Pascal’s mugging. And the doomsday argument. And the simulation argument.
A friend and colleague who was part of the transhumanist community and a founder of the World Transhumanist Association long before he was the Director of the Oxford Future of Humanity Institute, and who’s done a great deal to precisionize transhumanist ideas about global catastrophic risks and inform academia about them, as well as excellent original work on anthropic reasoning and the simulation argument. Bostrom is familiar with Less Wrong and has even tried to bring some of the work done here into mainstream academia, such as Pascal’s Mugging, which was invented right here on Less Wrong by none other than yours truly—although of course, owing to the constraints of academia and their prior unfamiliarity with elementary probability theory and decision theory, Bostrom was unable to convey the most exciting part of Pascal’s Mugging in his academic writeup, namely the idea that Solomonoff-induction-style reasoning will explode the size of remote possibilities much faster than their Kolmogorov complexity diminishes their probability.
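A toy numerical sketch of that last claim, using towers of 2s as a tame stand-in for 3^^^3 and a crude 2^-k penalty as a stand-in for an actual Solomonoff prior (all numbers illustrative):

```python
def tower(k):
    # 2^2^...^2, k times: description length grows linearly in k,
    # while the number named grows as a tower of exponentials.
    result = 1
    for _ in range(k):
        result = 2 ** result
    return result

# A claim describable in ~k symbols gets prior ~2**-k, but can name tower(k).
# The "expected payoff" tower(k) / 2**k grows without bound (exact integer math):
expected = [tower(k) // 2 ** k for k in range(1, 6)]
assert all(b >= a for a, b in zip(expected, expected[1:]))
assert expected[-1] > 10 ** 1000   # already astronomical at k = 5
```

The prior shrinks linearly in description length while the payoff explodes super-exponentially, which is the exploding-expected-utility problem in miniature.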
Reading Bostrom is a triumph of the rule “Read the most famous transhumanists” not “Read the most famous philosophers”.
The doomsday argument, which was not invented by Bostrom, is a rare case of genuinely interesting work done in mainstream philosophy—anthropic issues are genuinely not obvious, genuinely worth arguing about and philosophers have done genuinely interesting work on it. Similarly, although LW has gotten further, there has been genuinely interesting work in philosophy on the genuinely interesting problems of Newcomblike dilemmas. There are people in the field who can do good work on the rather rare occasions when there is something worth arguing about that is still classed as “philosophy” rather than as a separate science, although they cannot actually solve those problems (as very clearly illustrated by the Newcomblike case) and the field as a whole is not capable of distinguishing good work from bad work on even the genuinely interesting subjects.
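For readers who haven’t seen it, the core of the doomsday argument is a one-line Bayes update. A toy sketch under the self-sampling assumption (the hypothesis set and numbers are illustrative, not anyone’s considered estimates):

```python
from fractions import Fraction

def doomsday_posterior(rank, hypotheses):
    # Self-sampling: P(your birth rank = r | N humans ever) = 1/N for r <= N,
    # with equal priors over the candidate values of N.
    likelihood = {N: (Fraction(1, N) if rank <= N else Fraction(0))
                  for N in hypotheses}
    total = sum(likelihood.values())
    return {N: lh / total for N, lh in likelihood.items()}

# Illustrative units of one billion: you are roughly the 60-billionth human;
# compare "200 billion humans ever" against "200,000 billion humans ever".
post = doomsday_posterior(60, [200, 200_000])
assert post[200] > Fraction(99, 100)   # the small-N ("doom sooner") hypothesis dominates
```

The controversy is not over this arithmetic but over whether the self-sampling step is legitimate, which is exactly the genuinely non-obvious anthropic question.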
Ord on risks with low probabilities and high stakes.
Argued it on Less Wrong before he wrote the mainstream paper. The LW discussion got further, IMO. (And AFAIK, since I don’t know if there was any academic debate or if the paper just dropped into the void.)
Deontic logic
Is not useful for anything in real life / AI. This is instantly obvious to any sufficiently competent AI researcher. See e.g. http://norvig.com/design-patterns/img070.htm, a mention that turned up in passing back when I was doing my own search for prior work on Friendly AI.
...I’ll stop there, but do want to note, even if it’s out-of-order, that the work you glowingly cite on statistical prediction rules is familiar to me from having read the famous edited volume “Judgment Under Uncertainty: Heuristics and Biases” where it appears as a lovely chapter by Robyn Dawes on “The robust beauty of improper linear models”, which quite stuck in my mind (citation from memory). You may have learned about this from philosophy, and I can see how you would credit that as a use of reading philosophy, but it’s not work done in philosophy and, well, I didn’t learn about it there so this particular citation feels a bit odd to me.
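Dawes’s point is easy to reproduce in simulation. A hedged toy sketch (synthetic data, not Dawes’s actual datasets): fit least-squares weights on a small training sample, then compare them out of sample against an “improper” model that just adds the predictors up with unit weights.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 5
w_true = np.array([1.0, 0.8, 0.6, 0.4, 0.2])   # all predictors point the right way

def make_data(n):
    X = rng.standard_normal((n, p))             # predictors already standardized
    y = X @ w_true + rng.standard_normal(n)
    return X, y

X_tr, y_tr = make_data(20)    # small training sample, as in Dawes's setting
X_te, y_te = make_data(2000)

# "Proper" model: least-squares weights fit on the training sample.
beta, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
pred_ols = X_te @ beta
# "Improper" model: unit weights, no fitting at all.
pred_unit = X_te.sum(axis=1)

r_ols = np.corrcoef(pred_ols, y_te)[0, 1]
r_unit = np.corrcoef(pred_unit, y_te)[0, 1]
assert r_unit > 0.3 and r_ols > 0.3
assert abs(r_ols - r_unit) < 0.35   # unit weights are competitive out of sample
```

With small samples the fitted weights buy you little over just knowing the signs, which is the “robust beauty” in the chapter title.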
when IIRC, most of the philosophers were claiming at this point that causality was a mere illusion of correlation?
That this isn’t at all the case should be obvious even if the only thing you’ve read on the subject is Pearl’s book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon’s theory isn’t about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.
Tarski: But I thought you said you were not only influenced by Tarski’s mathematics but also his philosophical work on truth?
Chalmers’ paper: Yeah, it’s mostly useful as an overview. I should have clarified that I meant that Chalmers’ paper makes a more organized and compelling case for Good’s intelligence explosion than anybody at SIAI has in one place. Obviously, your work (and your debate with Robin) goes far beyond Chalmers’ introductory paper, but it’s scattered all over the place and takes a lot of reading to track down and understand.
And this would be the main reason to learn something from the mainstream: If it takes way less time than tracking down the same arguments and answers through hundreds of Less Wrong posts and other articles, and does a better job of pointing you to other discussions of the relevant ideas.
Talbot: I guess I’ll have to read more about what you mean by dissolution to cognitive algorithm. I thought the point was that even if you can solve the problem, there’s that lingering wonder about why people believe in free will, and once you explain why it is that humans believe in free will, not even a hint of the problem remains. The difference being that your dissolution of free will to cognitive algorithm didn’t (as I recall) cite any of the relevant science, whereas Talbot’s (and others’) dissolutions to cognitive algorithms do cite the relevant science.
Is there somewhere where you explain the difference between what Talbot, and also Kip Werking, have done versus what you think is so special and important about LW-style philosophy?
As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn’t, and people are better off reading statistics and AI and cognitive science, like I said. So I’m not sure there’s anything left to argue.
The one major thing I’d like clarification on (if you can find the time) is the difference between what experimental philosophers are doing (or what Joshua Greene is doing) and the dissolution-to-algorithm that you consider so central to LW-style philosophy.
I’d like to emphasize, to no one in particular, that the evaluation that seems to be going on here is about whether or not reading these philosophers is useful for building a Friendly recursively self-improving artificial intelligence. While that’s a good criterion for whether or not Eliezer should read them, failure to meet it doesn’t render the work of the philosopher valueless (really! it doesn’t!). The question “is philosophy helpful for researching AI” is not the same as the question “is philosophy helpful for a rational person trying to better understand the world”.
Tarski did philosophical work on truth? Apart from his mathematical logic work on truth? Haven’t read it if so.
What does Talbot say about a cognitive algorithm generating the appearance of free will? Is it one of the cognitive algorithms referenced in the LW dissolution or a different one? Does Talbot talk about labeling possibilities as reachable? About causal models with separate nodes for self and physics? Can you please take a moment to be specific about this?
Tarski did philosophical work on truth? Apart from his mathematical logic work on truth?
Okay, now you’re just drawing lines around what you don’t like and calling everything in that box philosophy.
Should we just hold a draft? With the first pick, the philosophers select… Judea Pearl! What? What’s that? The mathematicians have just grabbed Alfred Tarski from right under the noses of the philosophers!
To philosophers, Tarski’s work on truth is considered one of the triumphs of 20th century philosophy. But that sort of thing is typical of analytic and especially naturalistic philosophy (including your own philosophy): the lines between mathematics and science and philosophy are pretty fuzzy.
Talbot’s paper isn’t about free will (though others in experimental philosophy are); it’s about the cognitive mechanisms that produce intuitions in general. But anyway this is the post I’m drafting right now, so I’ll be happy to pick up the conversation once I’ve posted it. I might do a post on experimental philosophy and free will, too.
To philosophers, Tarski’s work on truth is considered one of the triumphs of 20th century philosophy.
Yet to Wikipedia, Tarski is a mathematician. Period. Philosophy is not mentioned.
It is true that mathematical logic can be considered a joint construction by philosophers and mathematicians. Frege, Russell, and Gödel are all listed in Wikipedia as both mathematicians and philosophers. So are a couple of modern contributors to logic—Dana Scott and Per Martin-Löf. But just about everyone else who made major contributions to mathematical logic—Peano, Cantor, Hilbert, Zermelo, Skolem, von Neumann, Gentzen, Church, Turing, Kolmogorov, Kleene, Robinson, Curry, Cohen, Lawvere, and Girard—is listed as a mathematician, not a philosopher. To my knowledge, the only pure philosopher who has made a contribution to logic at the level of these people is Kripke, and I’m not sure that should count (because the bulk of his contribution was done before he got to college and picked philosophy as a major. :)
Quine, incidentally, made a minor contribution to mathematical logic with his idea of ‘stratified’ formulas in his ‘New Foundations’ version of set theory. Unfortunately, Quine’s theory was found to be inconsistent. But a few decades later, a fix was discovered and today some of the most interesting Computer Science work on higher-order logic uses a variant of Quine’s idea to avoid Girard’s paradox.
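For concreteness, the stratification condition is simple enough to mechanize. A toy checker of my own (restricted to conjunctions of atomic formulas): a formula is stratified if each variable can be given an integer type such that every atom x = y forces type(x) == type(y) and every atom x ∈ y forces type(y) == type(x) + 1.

```python
def is_stratified(atoms):
    # atoms: tuples ('in', x, y) for "x ∈ y" and ('eq', x, y) for "x = y".
    # Build a graph of type-offset constraints and check them for consistency.
    graph = {}
    for kind, x, y in atoms:
        d = 1 if kind == 'in' else 0      # required value of type(y) - type(x)
        graph.setdefault(x, []).append((y, d))
        graph.setdefault(y, []).append((x, -d))
    types = {}
    for start in graph:
        if start in types:
            continue
        types[start] = 0
        stack = [start]
        while stack:
            v = stack.pop()
            for u, d in graph[v]:
                if u not in types:
                    types[u] = types[v] + d
                    stack.append(u)
                elif types[u] != types[v] + d:
                    return False          # contradictory type assignment
    return True

# x ∈ y and y ∈ z is stratified (types 0, 1, 2)...
assert is_stratified([('in', 'x', 'y'), ('in', 'y', 'z')])
# ...but x ∈ x is not, which is what blocks Russell-style comprehension in NF.
assert not is_stratified([('in', 'x', 'x')])
```

Restricting comprehension to stratified formulas is exactly the variant of the idea that the higher-order-logic work mentioned above borrows.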
Yet to Wikipedia, Tarski is a mathematician. Period. Philosophy is not mentioned.
This sort of thing is less a fact about the world and more an artifact of the epistemological bias in English Wikipedia’s wording and application of its verifiability rules. en:wp’s way of thinking started at computer technology—as far as I can tell, the first field in which Wikipedia was the most useful encyclopedia—and went in concentric circles out from there (comp sci, maths, physics, the other sciences); work in the humanities less than a hundred or so years old gets screwed over regularly. This is because the verifiability rules have to more or less compress a degree’s worth of training in sifting through human-generated evidence into a few quickly-comprehensible paragraphs, which are then misapplied by teenage science geek rulebots who have an “ugh” reaction to fuzzy subjects.
This is admittedly a bit of an overgeneralisation, but this sort of thing is actually a serious problem with Wikipedia’s coverage of the humanities. (Which I’m currently researching with the assistance of upset academics in the area in order to make a suitable amount of targeted fuss about.)
tl;dr: that’s stronger evidence of how Wikipedia works than of how the world works.
I believe Carnap is also primarily listed as a philosopher in Wikipedia, and he certainly counts as a major contributor to modern logic (although, of course, much of his work relates to mathematics as well).
Unfortunately, Quine’s theory was found to be inconsistent.
Quine’s set theory NF has not been shown to be inconsistent. Neither has it been proven consistent, even relative to large cardinals. This is actually a famous open problem (by the standards of set theory...)
The set theory of the 1940 first edition of Quine’s Mathematical Logic married NF to the proper classes of NBG set theory, and included an axiom schema of unrestricted comprehension for proper classes. In 1942, J. Barkley Rosser proved that Quine’s set theory was subject to the Burali-Forti paradox. Rosser’s proof does not go through for NF(U). In 1950, Hao Wang showed how to amend Quine’s axioms so as to avoid this problem, and Quine included the resulting axiomatization in the 1951 second and final edition of Mathematical Logic.
So I was wrong—the fix came only one decade later.
To any of the scientists and mathematicians I know personal and have discussed this with, the lines between science and philosophy, and mathematics and philosophy, are not fuzzy at all. Mostly I have only heard of philosophers talking about the line being fuzzy, or claiming that philosophy encompasses mathematics and science. The philosophers I have seen do this seem to do it because they desire the prestige that comes along with science and math’s success at changing the world.
Is experimental philosophy considered philosophy or science? Is formal epistemology considered philosophy or mathematics? Was Tarski doing math or philosophy? Is Stephen Hawking’s latest book philosophy or science? You can draw sharp lines if you want, but the world itself isn’t cut that way.
I missed this reply for some reason until I noticed it today.
My comment concerned what I have observed, not my personal belief, and I tried to word it as such. Such as: to any of the scientists and mathematicians “I know personal” (I am not going to repeat my spelling mistake), and “Mostly I have only heard of philosophers…”
I do not evaluate whole disciplines at once; I evaluate individual projects or experimental setups. For that reason, and because I was sharing what I considered an interesting observation rather than my personal belief, I do not think answering your questions will move the conversation forward significantly.
To me the line between science and non-science is clear, or can be made clear with further understanding. If society wants to draw a Venn diagram in which science and philosophy overlap, that is just one more case of non-orthogonal terminology. While non-orthogonal terminology is inefficient, it is not the worst of society’s problems and should not be focused on unduly. I do think the line between science and non-science should be as sharp as possible, and making it fuzzy is a bad thing for society/humanity.
I know Tarski as a mathematician and have acknowledged my debt to him as a mathematician.
As I pointed out before, the same is true for me of Quine. I don’t know if lukeprog means to include Mathematical Logic when he keeps saying not to read Quine, but that book was effectively my introduction to the subject, and I still hold it in high regard. It’s an elegant system with some important innovations, and features a particularly nice treatment of Gödel’s incompleteness theorem (one of his main objectives in writing the book). I don’t know if it’s the best book on mathematical logic there is (I doubt it), but it appeals to a certain kind of personality, and I would certainly recommend it to a young high-schooler over reading Principia Mathematica, for example.
That this isn’t at all the case should be obvious even if the only thing you’ve read on the subject is Pearl’s book. The entire counterfactual approach is due to Lewis and Stalnaker. Salmon’s theory isn’t about correlation either. Also, see James Woodward who has done very similar work to Pearl but from a philosophy department. Pearl cites all of them if I recall.
Stalnaker’s name sounds familiar from Pearl, so I’ll take your word for this and concede the point.
Cool. Let me know when you’ve finished your comment here and I’ll respond.
Done.
Quine’s naturalized epistemology: agreed.
But we could have the best of both worlds if SIAI spent some time writing well-referenced survey articles on their work, in the professional style instead of telling people to read hundreds of pages of blog posts (that mostly lack references) in order to figure out what you’re talking about.
Bratman: I don’t know his influence first hand, either—it’s just that I’ve seen his 1987 book mentioned in several books on AI and cognitive science.
Pearl: Jack beat me to the punch on this.
Talbot: I guess I’ll have to read more about what you mean by dissolution to cognitive algorithm. I thought the point was that even if you can solve the problem, there’s that lingering wonder about why people believe in free will, and once you explain why it is that humans believe in free will, not even a hint of the problem remains. The difference being that your dissolution of free will to cognitive algorithm didn’t (as I recall) cite any of the relevant science, whereas Talbot’s (and others’) dissolutions to cognitive algorithms do cite the relevant science.
Is there somewhere where you explain the difference between what Talbot, and also Kip Werking, have done versus what you think is so special and important about LW-style philosophy?
As for the others: Yeah, we seem to agree that useful work does sometimes come from philosophy, but that it mostly doesn’t, and people are better off reading statistics and AI and cognitive science, like I said. So I’m not sure there’s anything left to argue.
The one major thing I’d like clarification on (if you can find the time) is the difference between what experimental philosophers are doing (or what Joshua Greene is doing) and the dissolution-to-algorithm that you consider so central to LW-style philosophy.
I’d like to emphasize, to no one in particular, that the evaluation that seems to be going on here is about whether or not reading these philosophers is useful for building a Friendly recursively self-improving artificial intelligence. While thats a good criteria for whether or not Eliezer should read them, failure to meet this criteria doesn’t render the work of the philosopher valueless (really! it doesn’t!). The question “is philosophy helpful for researching AI” is not the same as the question “is philosophy helpful for a rational person trying to better understand the world”.
Tarski did philosophical work on truth? Apart from his mathematical logic work on truth? Haven’t read it if so.
What does Talbot say about a cognitive algorithm generating the appearance of free will? Is it one of the cognitive algorithms referenced in the LW dissolution or a different one? Does Talbot talk about labeling possibilities as reachable? About causal models with separate nodes for self and physics? Can you please take a moment to be specific about this?
Okay, now you’re just drawing lines around what you don’t like and calling everything in that box philosophy.
Should we just hold a draft? With the first pick the philosophers select… Judea Pearl! What? whats that? The mathematicians have just grabbed Alfred Tarski from right under the noses the of the philosophers!
To philosophers, Tarski’s work on truth is considered one of the triumphs of 20th century philosophy. But that sort of thing is typical of analytic and especially naturalistic philosophy (including your own philosophy): the lines between mathematics and science and philosophy are pretty fuzzy.
Talbot’s paper isn’t about free will (though others in experimental philosophy are); it’s about the cognitive mechanisms that produce intuitions in general. But anyway this is the post I’m drafting right now, so I’ll be happy to pick up the conversation once I’ve posted it. I might do a post on experimental philosophy and free will, too.
Yet to Wikipedia, Tarski is a mathematician. Period. Philosophy is not mentioned.
It is true that mathematical logic can be considered as a joint construction by philosophers and mathematicians. Frege, Russell, and Gödel are all listed in Wikipedia as both mathematicians and philosophers. So are a couple of modern contributors to logic—Dana Scott and Per Martin-Löf. But just about everyone else who made major contributions to mathematical logic—Peano, Cantor, Hilbert, Zermelo, Skolem, von Neumann, Gentzen, Church, Turing, Kolmogorov, Kleene, Robinson, Curry, Cohen, Lawvere, and Girard—is listed as a mathematician, not a philosopher. To my knowledge, the only pure philosopher who has made a contribution to logic at the level of these people is Kripke, and I’m not sure that should count (because the bulk of his contribution was done before he got to college and picked philosophy as a major. :)
Quine, incidentally, made a minor contribution to mathematical logic with his idea of ‘stratified’ formulas in his ‘New Foundations’ version of set theory. Unfortunately, Quine’s theory was found to be inconsistent. But a few decades later, a fix was discovered and today some of the most interesting Computer Science work on higher-order logic uses a variant of Quine’s idea to avoid Girard’s paradox.
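The ‘stratification’ idea mentioned above is mechanically checkable, which is part of why it appeals to computer scientists. As a minimal sketch (the function names and formula encoding here are my own, not anything from Quine or the NF literature): a formula is stratified if each variable can be assigned an integer level such that every membership atom “x ∈ y” forces level(y) = level(x) + 1 and every equality atom forces equal levels. That reduces to consistency-checking a system of difference constraints:

```python
# Sketch of checking Quine-style stratification (hypothetical helper names).
# A formula is stratified iff its variables admit integer levels with
# level(y) == level(x) + 1 for each atom "x in y", and equal levels for
# each atom "x = y". We check consistency of these constraints by BFS.

from collections import deque

def is_stratified(memberships, equalities=()):
    """memberships: (x, y) pairs meaning "x is a member of y".
    equalities: (x, y) pairs meaning "x = y".
    Returns True iff some level assignment satisfies all constraints."""
    # Edge (u, v, d) encodes the constraint level(v) - level(u) == d.
    edges = {}
    def add(u, v, d):
        edges.setdefault(u, []).append((v, d))
        edges.setdefault(v, []).append((u, -d))
    for x, y in memberships:
        add(x, y, 1)
    for x, y in equalities:
        add(x, y, 0)
    level = {}
    for start in edges:
        if start in level:
            continue
        level[start] = 0       # each connected component is anchored at 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, d in edges[u]:
                want = level[u] + d
                if v not in level:
                    level[v] = want
                    queue.append(v)
                elif level[v] != want:
                    return False  # conflicting levels: not stratifiable
    return True

# "x in y and y in z" is stratified; "x in x" (Russell-style) is not.
print(is_stratified([("x", "y"), ("y", "z")]))  # True
print(is_stratified([("x", "x")]))              # False
```

The Russell-paradox comprehension instance is exactly the kind of formula this test rejects, which is how NF blocks the paradox without a cumulative hierarchy.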
This sort of thing is less a fact about the world and more an artifact of the epistemological bias in English Wikipedia’s wording and application of its verifiability rules. en:wp’s way of thinking started at computer technology—as far as I can tell, the first field in which Wikipedia was the most useful encyclopedia—and went out in concentric circles from there (comp sci, maths, physics, the other sciences); work in the humanities less than a hundred or so years old gets screwed over regularly. This is because the verifiability rules have to more or less compress a degree’s worth of training in sifting through human-generated evidence into a few quickly-comprehensible paragraphs, which are then misapplied by teenage science geek rulebots who have an “ugh” reaction to fuzzy subjects.
This is admittedly a bit of an overgeneralisation, but this sort of thing is actually a serious problem with Wikipedia’s coverage of the humanities. (Which I’m currently researching with the assistance of upset academics in the area in order to make a suitable amount of targeted fuss about.)
tl;dr: that’s stronger evidence of how Wikipedia works than of how the world works.
Wikipedia is not authoritative (and recognizes this explicitly—hence the need to give citations). Here is a quote from Tarski himself:
That sounds like a good way to describe the LW ideal as well.
I believe Carnap is also primarily listed as a philosopher in Wikipedia, and he certainly counts as a major contributor to modern logic (although, of course, much of his work relates to mathematics as well).
Quine’s set theory NF has not been shown to be inconsistent. Neither has it been proven consistent, even relative to large cardinals. This is actually a famous open problem (by the standards of set theory...)
However, NFU (New Foundations with Urelements) is consistent relative to ZF.
Quoting Wikipedia
So I was wrong—the fix came only one decade later.
Oh, that’s where the name is familiar from...
To any of the scientists and mathematicians I know personal and have discussed this with, the lines between science and philosophy and mathematics and philosophy are not fuzzy at all. Mostly I have only heard philosophers talk about the line being fuzzy, or claim that philosophy encompasses mathematics and science. The philosophers that I have seen do this seem to do it because they desire the prestige that comes along with science and math’s success at changing the world.
Is experimental philosophy considered philosophy or science? Is formal epistemology considered philosophy or mathematics? Was Tarski doing math or philosophy? Is Stephen Hawking’s latest book philosophy or science? You can draw sharp lines if you want, but the world itself isn’t cut that way.
I missed this reply for some reason until I noticed it today.
My comment concerned what I have observed and not my personal belief and I tried to word it as such. Such as: To any of the scientists and mathematicians “I know personal.”(I am not going to repeat my spelling mistake), Mostly I have only heard of philosophers …
I do not evaluate whole disciplines at once. I do evaluate individual projects or experimental set ups. For this reason and that I was sharing what I considered an interesting observation not my personal belief, I do not think answering your questions will forward the conversation significantly.
To me the line between science and non-science is clear, or can be made clear with further understanding. If society wants to draw a Venn diagram where there is overlap between science and philosophy, it is just one more case of non-orthogonal terminology. While non-orthogonal terminology is inefficient, it is not the worst of society’s problems and should not be focused on unduly. I do think the line between science and non-science should be as sharp as possible, and making it fuzzy is a bad thing for society/humanity.
As I pointed out before, the same is true for me of Quine. I don’t know if lukeprog means to include Mathematical Logic when he keeps saying not to read Quine, but that book was effectively my introduction to the subject, and I still hold it in high regard. It’s an elegant system with some important innovations, and features a particularly nice treatment of Gödel’s incompleteness theorem (one of his main objectives in writing the book). I don’t know if it’s the best book on mathematical logic there is (I doubt it), but it appeals to a certain kind of personality, and I would certainly recommend it to a young high-schooler over reading Principia Mathematica, for example.