Ok, response here from somebody who has studied philosophy. I disagree with a lot of what DSherron said, but on one point we agree—don’t get a philosophy degree. Take some electives, sure—that’ll give you an introduction to the field—but after that there’s absolutely no reason to pay for a philosophy degree. If you’re interested in it, you can learn just as much by reading in your spare time for FREE. I regret my philosophy degree.
So, now that that’s out of the way: philosophy isn’t useless. In fact, at its more useful end it blurs pretty seamlessly into mathematics. It’s also relevant to cognitive science, and in fact to science in general. The only time philosophy is useless is when it isn’t being used to do anything. So, sure, pure philosophy is useless, but that’s like saying “pure rationality is useless”. We use rationality in combination with every other discipline; that’s the point of rationality.
As for the OP’s questions:
DSherron suggests following the method of the 14th-century philosopher William of Ockham, but I don’t think that’s relevant to the question. As far as I can tell, ALL justificatory systems suffer from the Münchhausen trilemma. Given that, Foundationalism and Coherentism seem to me to be pretty much equivalent. You wouldn’t pick incoherent axioms as your foundations, and conversely any coherent system of justifications should be decomposable into an orthogonal set of fundamental axioms and theorems derived therefrom. Maybe there’s something I’m missing, though.
DSherron’s point is a good one. It was first formalised by the philosopher-mathematician Leibniz who proposed the principle of the Identity of Indiscernibles.
DSherron suggests that the LW sequence “A Human’s Guide to Words” is relevant here. Since that sequence is basically a huge discussion of the philosophy of language, and makes dozens of philosophical arguments aimed at correcting philosophical errors, I agree that it is a useful resource.
I’m doing a philosophy degree for two reasons. The first is that I enjoy philosophy (and a philosophy degree gives me plenty of opportunities to discuss it with others). The second is that Philosophy is my best prospect of getting the marks I need to get into a Law course. Both of these are fundamentally pragmatic.
1: Any Coherentist system could be remade as a Weak Foundationalist system, but the Weak Foundationalist would be asked why they give their starting axioms special privileges (hence both sides of my discussion have criticised them heavily).
The Coherentists in the argument have gone to great pains to say that “consistency” and “coherence” are different things: their idea of coherence is complicated, but basically involves judging any belief by how well interconnected it is with other beliefs. The Foundationalists have said that although they ultimately resort to axioms, those axioms are self-evident ones that any system must accept.
2: Could you clarify this point please? Superficially it seems contradictory (as it is a principle that cannot be demonstrated empirically itself), but I’m presumably missing something.
3: About the basic philosophy of language I agree. What I need here is empirical evidence to show that this applies specifically to the Contextualist vs. Invariantist question.
For 1) the answer is basically to figure out what bets you’re willing to make. You don’t know anything, for strong definitions of “know”. Absolutely nothing, not one single thing, and there is no possible way to prove anything without already knowing something. But here’s the catch: beliefs are probabilities. You can say “I don’t know that I’m not going to be burned at the stake for writing on Less Wrong” while also saying “but I probably won’t be”. You have to make a decision: choose your priors. You can pick ones at random, or you can pick ones that seem like they work to accomplish your real goals in the real world; I can’t technically fault you for your priors, but then again justification to other humans isn’t really the point. I’m not sure how exactly Coherentists think they can arrive at any beliefs whatsoever without taking some arbitrary ones to start with, and I’m not sure how anyone thinks that any beliefs are “self-evident”. You can choose whatever priors you want, I guess, but if you choose any really weird ones let me know, because I’d like to make some bets with you… We live in a low-entropy universe; simple explanations exist. You can dispute how I know that, but if you truly believed otherwise then you should be making bets left and right and winning against anyone who thought something silly, like that a coin would stay 50/50 just because it usually does.
Basically, you can’t argue anything to an ideal philosopher of perfect emptiness, any more than you can argue anything to a rock. If you refuse to accept anything, then you can go do whatever you want (or perhaps you can’t, since you don’t know what you want), and I’ll get on with the whole living thing over here. You should read “The Simple Truth”; it’s a nice exploration of some of these ideas. You can’t justify knowledge, at all, and there’s no difference between claiming an arbitrary set of axioms and an arbitrary set of starting beliefs (they are literally the same thing), but you can still count sheep, if you really want to.
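The “beliefs are probabilities, choose your priors” point above can be made concrete with a toy Bayesian sketch (my own illustration, nothing from the thread; the Beta-prior framing and all the numbers are assumptions for demonstration). Two agents hold very different priors about a coin’s bias toward heads; after observing the same flips, the evidence eventually washes out even a very weird prior, which is why betting against the weird-prior agent is profitable only until the data piles up.

```python
def posterior_mean(prior_heads, prior_tails, heads, tails):
    """Mean of the Beta posterior: a Beta(a, b) prior updated on
    observed heads/tails gives Beta(a + heads, b + tails)."""
    return (prior_heads + heads) / (prior_heads + heads + prior_tails + tails)

# A "reasonable" prior and a "weird" prior (near-certain the coin favours tails).
reasonable = (1, 1)     # uniform prior over the bias
weird = (1, 100)        # strongly expects tails

# Both agents observe 70 heads in 100 flips.
heads, tails = 70, 30

print(round(posterior_mean(*reasonable, heads, tails), 3))  # 0.696
print(round(posterior_mean(*weird, heads, tails), 3))       # 0.353, still dragged down

# With 7000 heads in 10000 flips the weird prior is all but washed out:
print(round(posterior_mean(*weird, 7000, 3000), 3))         # 0.693
```

The design point is the one made in the comment: you can’t fault anyone’s priors on purely logical grounds, but priors that diverge wildly from the world lose bets at a predictable rate until the evidence overwhelms them.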
2) is mostly contained in 1), I think.
3) Why do you need empirical evidence? What could that possibly show you? I guess you could theoretically get a bunch of Contextualists and Invariantists together and show that most of them think that “know” has a fundamental meaning, but that’s only evidence that those people are silly. Words are not special. To draw from your lower comment to me, “a trout is a type of fish” is not fundamentally true, linguistically or otherwise. It is true when you, as an English speaker, say it in an English forum, read by English speakers. Is “Фольре є омдни з дівви риб” a linguistic truth? That’s (probably) the same sentence in a language picked at random off Google Translate. So, is it true? Answer before you continue reading.
Actually, I lied. That sentence is gibberish; I moved the letters around. A native speaker of that language would have told you it was clearly not true. But you had no idea whether it was or wasn’t; you don’t speak that language, and for that matter neither do I. I could have just written profanity for all I know. But the meanings are not fundamental to the little squiggles on your computer screen; they are in your mind. Words are just mental paintbrush handles, and with them we can draw pictures in each other’s minds, similar to those in our own. If you knew that I had had some kind of neurological malfunction such that I associated the word “trout” with a mental image of a moderately sized land-bound mammal, and I said “a trout is a type of fish”, you would know that I was wrong (and possibly confused about what fish were). If you told me “a trout is a type of fish”, without clarifying that your idea of trout was different from mine, you’d be lying. Words do not have meanings; they are simply convenient mental handles to paint broad pictures in each other’s minds. “Know” is exactly the same way. There is no true, really real, more-real-than-that-other-one meaning of “know”, just the broad pictures that the word can paint in minds. The only reason anyone argues over definitions is to sneak in underhanded connotations (or, potentially, to demand that they not be brought in). There is no argument. Whatever the Contextualists want to mean by “know” can be called “to flozzlebait”, and whatever the Invariantists want to mean by it can be called “to mankieinate”. There, now that they both understand each other, they can resolve their argument… if there ever even was one (which I doubt).
1: The Foundationalists have claimed probability is off the metaphorical table: the concept of probability rests either on subjective feeling (irrational) or on empirical evidence (circular, as our belief in empirical evidence rests on the assumption that it is probable). They had problems with “self-evident”, but I created a new definition: “must be true in any possible universe” (although I’m not sure of the truth of his conclusion, the way Eliezer describes a non-reductionist universe basically claims this sort of self-evidence for reductionism).
2: Doesn’t solve the problem I have with it.
3: Of the statement “A trout is a type of fish”, the simplification “This statement is true in English” is good enough to describe reality. The invariantist, and likely the contextualist, would claim that universally, across languages, humans have a concept of “knows”, however they describe it, which fits their philosophy.
You’re right, my statement was far too strong, and I hereby retract it. Instead, I claim that philosophy which is not firmly grounded in the real world, such that it effectively becomes another discipline, is worthless. A philosophy book is unlikely to contain very much of value, but a cognitive science book which touches on ideas from philosophy is more valuable than one which doesn’t. The problem is that most philosophy is just attempts to argue for things that sound nice, logically, with not a care for their actual value. Philosophy is not entirely worthless, since it forms the backbone of rationality, but the useful parts are almost all settled questions (and the ones that aren’t are effectively the grounds of science, not abstract discussion). We already know how to form beliefs that work in the real world, justified by the fact that they work in the real world. We already know how to get to the most basic form of rationality, from which we can then use the tools recursively to improve them. We know how to integrate new science into our belief structure. The major traditionally philosophical question we still don’t have an answer to, namely morality, fundamentally reduces to an empirical question: what do humans in fact value? We already know that morality as we generally imagine it is fundamentally a flawed concept, since there are no moral laws which bind us from the outside, just the fact that we value some things beyond ourselves and our tribe. The field is effectively empty of useful open questions (the justification of priors is one of the few relevant ones remaining, but it’s also one which doesn’t help us much in real life).
Basically, whether philosophers dispute something is essentially uncorrelated with whether there is a clear answer to it. If you want to know truth, don’t talk to a philosopher. If you pick your beliefs based on the strength of human arguments, you’re going to believe whatever the most persuasive person believes, and there’s only weak evidence that that should correlate with truth. Sure, philosophy feeds into rationality and cog-sci and mathematics, but if you want to figure out which parts do so in a useful way, go study those fields. The problem with philosophy as a field is not the questions it asks but the way it answers them; there is no force that drives philosophers to accept correct arguments that they don’t like, so they all believe whatever they want to believe (and everyone says that’s ok). I mean, anti-reductionism? Epiphenomenalism? This stuff is maybe a little better than religious nonsense, but it still deserves to be laughed at, not taken as a serious opponent. My problem is not the fundamentals of the field, but the way it exists in the real world.
If you judge philosophy by what helps us in the empirical world, this is mostly correct. The importance of rationality to philosophy (granted the existence of an empirical world) I also agree with. However, some people want to know the true answers to these questions, useful or not. For that, argument is all we’ve got.
I would mostly agree with rationality training for philosophers, except in that there is something both circular and silly about using empirical data to influence, if indirectly, discussions of whether the empirical world exists.
Super quick and dirty response: I believe it exists, you believe it exists, and everyone you’ve ever spoken to believes it exists. You have massive evidence that it exists in the form of memories which seem far more likely to come from it actually existing than any other possibility. Is there a chance we’re all wrong (or that you’re hallucinating the rest of us, etc.)? Of course. There always is. If someone demands proof that it exists, they will be disappointed—there is no such thing as irrefutable truth. Not even “a priori” logic—not only could you be mistaken, but additionally your thoughts are physical, empirical phenomena, so you can’t take their existence as granted while denying the physical world the same status.
If anyone really truly believes that the empirical world doesn’t exist, you haven’t heard from them. They might believe that they believed it, but to truly believe that it doesn’t exist, or even simply that we have no evidence either way and it’s therefore a tossup, they won’t bother arguing about it (it’s as likely to cause harm as good). They’ll pick their actions completely at random, and probably die because “eat” never came up on their list. If anyone truly thinks that the status of the physical world is questionable, as a serious position, I’d like to meet them. I’d also like to get them help, because they are clinically insane (that’s what we call people who can’t connect to reality on some level).
Basically, the whole discussion is moot. There is no reason for me to deny the existence of what I see, nor for you to do so, nor anyone else having the discussion. Reality exists, and that is true, whether or not you can argue a rock into believing it. I don’t care what rocks, or neutral judges, or anyone like that believes. I care about what I believe and what other humans and human-like things believe. That’s why philosophy in that manner is worthless—it’s all about argumentation, persuasion, and social rules, not about seeking truth.
Your argument is about as valid as “take it on faith”. First, unless appealing to pragmatism, your argument is circular in using the belief of others when you can’t justifiably assume their existence. Second, your argument is irrational in that it appeals to “everybody believes X” to support X. Third, a source claiming X to be so is only evidence for X being so if you have reason to consider the source reliable.
You are also mixing up “epistemic order” with “empirical order”, to coin two new terms. “Epistemic order” represents the order of inference: if I infer A from B and B from C, then C is prior to B, and B is prior to A, in epistemic order, regardless of the real-world relation between whatever they are. “Empirical order”, of course, represents what empirically causes what (if indeed anything causes anything).
A person detects their own thoughts in a different way from the way they detect their own senses, so the two are unrelated in epistemic order. You raise a valid point about assuming that one’s thoughts really are one’s thoughts, but unless resorting to the Memory Argument (which is part of the Evil Demon argument I discussed) they are at least available as arguments to consider.
The Foundationalist skeptic is arguing that believing in the existence of the world IS IRRATIONAL. Without resorting to the arguments I describe in the first post, there seems to be no way to get around this. Pragmatism clearly isn’t one, after all.