Warning: I am not a philosophy student and haven’t the slightest clue what any of your terms mean. That said, I can still answer your questions.
1) Occam’s Razor to the rescue! If you distribute your priors according to complexity and update on evidence using Bayes’ Theorem, then you’re entirely done. There’s nothing else you can do. Sure, if you’re unlucky then you’ll get very wrong beliefs, but what are the odds of a demon messing with your observations? Pretty low, compared to the much simpler explanation that what you think you see correlates well to the world around you. One and zero are not probabilities; you are never certain of anything, even those things you’re probably getting used to calling a priori truths. Learn to abandon your intuitions about certainty; even if you could be certain of something, your default intuitions will lead you to make bad bets when certainty is involved, so there’s nothing there worth holding on to. In any case, the right answer is understanding that beliefs are always, always, always uncertain. I’m pretty sure that 2 + 2 = 4, but I could be convinced otherwise by an overwhelming mountain of evidence.
2) I don’t know what question is being asked here, but if it has no possible impact on the real world then you can’t decide if it’s true or false. Look at Bayes’ Theorem; if P(evidence | statement) is equal to P(evidence), then your final belief is the same as your prior. If there is in principle no experiment you could run which would give you evidence for or against it, then the question is not really a question; knowing it was true or false would tell you nothing about which possible world you live in; it would not let you update your map. It is not merely useless but fundamentally not in the same class of statements as things like “are apples yellow?” or “should machines have legal rights?” (with “should” referring to generalized human preferences). If there is an experiment you could run in principle, and knowing whether the statement is true or false would tell you something, then you simply have to refer to Occam’s Razor to find your prior. You won’t necessarily get an answer that’s firmly one way or another, but you might.
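To make the point about uninformative evidence explicit, here is a minimal sketch of a Bayesian update (the probabilities are made-up numbers, purely for illustration): when P(evidence | statement) equals P(evidence), the update is a no-op and your posterior equals your prior.

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
def update(prior, p_e_given_h, p_e):
    """Posterior probability of hypothesis H after observing evidence E."""
    return p_e_given_h * prior / p_e

# Informative evidence: E is twice as likely if H is true, so belief moves.
informative = update(prior=0.3, p_e_given_h=0.8, p_e=0.4)    # ~0.6

# "Evidence" with P(E|H) == P(E): the posterior equals the prior exactly,
# so the observation tells you nothing about which world you live in.
uninformative = update(prior=0.3, p_e_given_h=0.4, p_e=0.4)  # ~0.3
```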
3) I’ll admit I had to look this up to give an answer. What I found was that there is literally not a question here. Go read A Human’s Guide to Words (sequence on LW) to understand why, although I’ll give a brief explanation. “Knowledge”, the word, is not a fundamental thing. Nowhere is there inscribed on the Almighty Rock of Knowledge that “knowledge” means “justified true belief” or “correctly assigned >90% certainty” or “things the Flying Spaghetti Monster told you.” It only has meaning as a symbol that we humans can use to communicate. If I made it clear that I was going to use the phrase “know x” to mean “ate x for breakfast”, and then said “I know a chicken biscuit”, I would be committing an error; but that error would have nothing to do with the true meaning of “know”. When I say “I know that the earth is not flat”, I mean that I have seen pretty strong evidence that the earth really isn’t flat, such that for it to be flat would require a severe mental break on my part or other similarly unlikely circumstances. I don’t know it with certainty; I don’t know anything with certainty. But that’s not what “know” means in the minds of most people I speak with, so I can say “I know the world is not flat” and everyone around me gets the right idea. There is no such thing as a correct attribution of knowledge, nor an incorrect one, because knowledge is not a fundamental thing nor sharply defined, but instead it’s a fuzzy shape in conceptspace which corresponds to some human intuitions about the world but not to the actual territory. Humans are biased towards concrete true/false dichotomies, but that’s not how the real world works. Once you realize that beliefs are probabilities you’ll realize how incredibly silly most philosophical discussions of knowledge are.
My quick advice to you in general (so that you can solve future problems like this on your own) is three-fold. First, learn Bayes and keep it close to you at all times. The Twelve Virtues of Rationality are a nice way to remind yourself of what it means to want to actually get the right answer. Second, read A Human’s Guide to Words, and in particular play Rationalist Taboo constantly. Play it with yourself before you speak and with others when they use words like “knowledge” or “free will”. Do not simply accept a vague intuition; play it until you’re certain of what you mean (and it matches what you meant when you first said it), or certain that you have no idea. Pro tip: free will sounds like a pretty simple concept, but you have no idea how to specify it other than that thing that you can feel you have. (And any other specification fails to capture what you or anybody else really want to talk about). Third, and I’m sure some people will disagree here, but… Get the heck out of philosophy. There is almost nothing of value that you’ll get from the field. Almost all of it is trash, because there really aren’t enough interesting questions that don’t require you to actually go out and do /gasp/ science to justify an entire field. Pretty much all the important ones have answers already, although you wouldn’t know that by talking to philosophers. Philosophy was worthwhile in Ancient Greece when “philosopher” meant “aspiring rationalist” and human knowledge was at the stage of gods controlling everything, but in the modern day we already have the basic rationalist’s toolkit available for mass consumption. Any serious advance made in the Art will come from needing it to do something that the Art you were taught couldn’t do for you, and such advances are what philosophy should be, but isn’t, providing.
You won’t find need of a new rationalist Art if you’re trying to convince other people, who by definition do not already have this new Art, of some position that you stumbled upon because of other people who argued it convincingly to you. If you care about the human state of knowledge, go into any scientific discipline. Otherwise just pick literally anything else. There’s nothing for you in philosophy except for a whole lot of confused words.
Ok, response here from somebody who has studied philosophy. I disagree with a lot of what DSherron said, but on one point we agree—don’t get a philosophy degree. Take some electives, sure—that’ll give you an introduction to the field—but after that there’s absolutely no reason to pay for a philosophy degree. If you’re interested in it, you can learn just as much by reading in your spare time for FREE. I regret my philosophy degree.
So, now that that’s out of the way: philosophy isn’t useless. In fact, at its more useful end it blurs pretty seamlessly into mathematics. It’s also relevant to cognitive science, and in fact science in general. The only time philosophy is useless is when it isn’t being used to do anything. So, sure, pure philosophy is useless, but that’s like saying “pure rationality is useless”. We use rationality in combination with every other discipline; that’s the point of rationality.
As for the OP’s questions:
DSherron suggests following the method of the 14th-century philosopher William of Ockham, but I don’t think that’s relevant to the question. As far as I can tell, ALL justificatory systems suffer from the Münchhausen Trilemma. Given that, Foundationalism and Coherentism seem to me to be pretty much equivalent. You wouldn’t pick incoherent axioms as your foundations, and conversely any coherent system of justifications should be decomposable into an orthogonal set of fundamental axioms and theorems derived thereof. Maybe there’s something I’m missing, though.
DSherron’s point is a good one. It was first formalised by the philosopher-mathematician Leibniz who proposed the principle of the Identity of Indiscernibles.
DSherron suggests that the LW sequence “A Human’s Guide to Words” is relevant here. Since that sequence is basically a huge discussion of the philosophy of language, and makes dozens of philosophical arguments aimed at correcting philosophical errors, I agree that it is a useful resource.
I’m doing a philosophy degree for two reasons. The first is that I enjoy philosophy (and a philosophy degree gives me plenty of opportunities to discuss it with others). The second is that Philosophy is my best prospect of getting the marks I need to get into a Law course. Both of these are fundamentally pragmatic.
1: Any Coherentist system could be remade as a Weak Foundationalist system, but the Weak Foundationalist would be asked why they give their starting axioms special privileges (hence both sides of my discussion have dissed on them massively).
The Coherentists in the argument have gone to great pains to say that “consistency” and “coherence” are different things: their idea of coherence is complicated, but basically involves judging any belief by how well interconnected it is with other beliefs. The Foundationalists have said that although they ultimately resort to axioms, those axioms are self-evident axioms that any system must accept.
2: Could you clarify this point please? Superficially it seems contradictory (as it is a principle that cannot be demonstrated empirically itself), but I’m presumably missing something.
3: About the basic philosophy of language I agree. What I need here is empirical evidence to show that this applies specifically to the Contextualist vs. Invariantist question.
For 1) the answer is basically to figure out what bets you’re willing to make. You don’t know anything, for strong definitions of know. Absolutely nothing, not one single thing, and there is no possible way to prove anything without already knowing something. But here’s the catch; beliefs are probabilities. You can say “I don’t know that I’m not going to be burned at the stake for writing on Less Wrong” while also saying “but I probably won’t be”. You have to make a decision; choose your priors. You can pick ones at random, or you can pick ones that seem like they work to accomplish your real goals in the real world; I can’t technically fault you for priors, but then again justification to other humans isn’t really the point. I’m not sure how exactly Coherentists think they can arrive at any beliefs whatsoever without taking some arbitrary ones to start with, and I’m not sure how anyone thinks that any beliefs are “self-evident”. You can choose whatever priors you want, I guess, but if you choose any really weird ones let me know, because I’d like to make some bets with you… We live in a low-entropy universe; simple explanations exist. You can dispute how I know that, but if you truly believed any differently then you should be making bets left and right and winning against anyone who thought something silly like that a coin would stay 50⁄50 just because it usually does.
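As a toy illustration of that betting point (the 70/30 coin and the dollar stakes here are invented purely for the example): someone who insists a biased coin "stays 50/50 just because coins usually do" will happily take even-odds bets, and loses money in expectation to someone whose prior tracks the coin's actual behavior.

```python
import random

random.seed(0)  # fixed seed so the illustration is deterministic

# A hypothetical coin that actually lands heads 70% of the time.
P_HEADS = 0.7

def net_winnings(n_flips):
    """Net dollars after n even-odds $1 bets on heads against a 50/50 believer.

    Each flip: you win $1 on heads, lose $1 on tails. At even odds against
    a 70% coin, your expected value is +$0.40 per flip.
    """
    return sum(1 if random.random() < P_HEADS else -1 for _ in range(n_flips))

profit = net_winnings(10_000)  # expected value is roughly +$4,000
```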
Basically, you can’t argue anything to an ideal philosopher of perfect emptiness, any more than you can argue anything to a rock. If you refuse to accept anything, then you can go do whatever you want (or perhaps you can’t, since you don’t know what you want), and I’ll get on with the whole living thing over here. You should read “The Simple Truth”; it’s a nice exploration of some of these ideas. You can’t justify knowledge, at all, and there’s no difference between claiming an arbitrary set of axioms and an arbitrary set of starting beliefs (they are literally the same thing), but you can still count sheep, if you really want to.
2) is mostly contained in 1), I think.
3) Why do you need empirical evidence? What could that possibly show you? I guess you could theoretically get a bunch of Contextualists and Invariantists together and show that most of them think that “know” has a fundamental meaning, but that’s only evidence that those people are silly. Words are not special. To draw from your lower comment to me, “a trout is a type of fish” is not fundamentally true, linguistically or otherwise. It is true when you, as an English speaker, say it in an English forum, read by English speakers. Is “Фольре є омдни з дівви риб” a linguistic truth? That’s (probably) the same sentence in a language picked at random off Google Translate. So, is it true? Answer before you continue reading.
Actually, I lied. That sentence is gibberish; I moved the letters around. A native speaker of that language would have told you it was clearly not true. But you had no idea whether it was or wasn’t; you don’t speak that language, and for that matter neither do I. I could have just written profanity for all I know. But the meanings are not fundamental to the little squiggles on your computer screen; they are in your mind. Words are just mental paintbrush handles, and with them we can draw pictures in each other’s minds, similar to those in our own. If you knew that I had had some kind of neurological malfunction such that I associated the word “trout” to a mental image of a moderately sized land-bound mammal, and I said “a trout is a type of fish”, you would know that I was wrong (and possibly confused about what fish were). If you told me “a trout is a type of fish”, without clarifying that your idea of trout was different from mine, you’d be lying. Words do not have meanings; they are simply convenient mental handles to paint broad pictures in each other’s minds. “Know” is exactly the same way. There is no true, really real more real than that other one meaning of “know”, just the broad pictures that the word can paint in minds. The only reason anyone argues over definitions is to sneak in underhanded connotations (or, potentially, to demand that they not be brought in). There is no argument. Whatever the Contextualists want to mean by “know” can be called “to flozzlebait”, and whatever the Invariantists want to mean by it can be called “to mankieinate”. There, now that they both understand each other, they can resolve their argument… If there ever even was one (which I doubt).
1: The Foundationalists have claimed probability is off the metaphorical table: the concept of probability rests either on subjective feeling (irrational) or on empirical evidence (circular, as our belief in empirical evidence rests on the assumption it is probable). They had problems with self-evident, but I created a new definition: “Must be true in any possible universe” (although I’m not sure of the truth of his conclusion, the way Eliezer describes a non-reductionist universe basically claims this sort of self-evidence for reductionism).
2: Doesn’t solve the problem I have with it.
3: Of the statement “A trout is a type of fish”, the simplification “This statement is true in English” is good enough to describe reality. The invariantist, and likely the contextualist, would claim that universally, across languages, humans have a concept of “knows”, however they describe it, which fits their philosophy.
You’re right, my statement was far too strong, and I hereby retract it. Instead, I claim that philosophy which is not firmly grounded in the real world such that it effectively becomes another discipline is worthless. A philosophy book is unlikely to contain very much of value, but a cognitive science book which touches on ideas from philosophy is more valuable than one which doesn’t. The problem is that most philosophy is just attempts to argue for things that sound nice, logically, with not a care for their actual value. Philosophy is not entirely worthless, since it forms the backbone of rationality, but the problem is the useful parts are almost all settled questions (and the ones that aren’t are effectively the grounds of science, not abstract discussion). We already know how to form beliefs that work in the real world, justified by the fact that they work in the real world. We already know how to get to the most basic form of rationality from whence we can then use the tools recursively to improve them. We know how to integrate new science into our belief structure. The major thing which has traditionally been a philosophical question which we still don’t have an answer to, namely morality, is fundamentally reduced to an empirical question: what do humans in fact value? We already know that morality as we generally imagine it is fundamentally a flawed concept, since there are no moral laws which bind us from the outside, but just the fact that we value some things that aren’t just us and our tribe. The field is effectively empty of useful open questions (the justification of priors is one of the few relevant ones remaining, but it’s also one which doesn’t help us in real life much).
Basically, whether philosophers dispute something is essentially uncorrelated with whether there is a clear answer on it or not. If you want to know truth, don’t talk to a philosopher. If you pick your beliefs based on strength of human arguments, you’re going to believe whatever the most persuasive person believes, and there’s only weak evidence that that should correlate with truth. Sure, philosophy feeds into rationality and cog-sci and mathematics, but if you want to figure out which parts do so in a useful way, go study those fields. The problem with philosophy as a field is not the questions it asks but the way it answers them; there is no force that drives philosophers to accept correct arguments that they don’t like, so they all believe whatever they want to believe (and everyone says that’s ok). I mean, anti-reductionism? Epiphenomenalism? This stuff is maybe a little better than religious nonsense, but it still deserves to be laughed at, not taken as a serious opponent. My problem is not the fundamentals of the field, but the way it exists in the real world.
If you judge philosophy by what helps us in the empirical world, this is mostly correct. The importance of rationality to philosophy (granted the existence of an empirical world) I also agree with. However, some people want to know the true answers to these questions, useful or not. For that, argument is all we’ve got.
I would mostly agree with rationality training for philosophers, except that there is something both circular and silly about using empirical data to influence, if indirectly, discussions on whether the empirical world exists.
Super quick and dirty response: I believe it exists, you believe it exists, and everyone you’ve ever spoken to believes it exists. You have massive evidence that it exists in the form of memories which seem far more likely to come from it actually existing than any other possibility. Is there a chance we’re all wrong (or that you’re hallucinating the rest of us, etc.)? Of course. There always is. If someone demands proof that it exists, they will be disappointed—there is no such thing as irrefutable truth. Not even “a priori” logic—not only could you be mistaken, but additionally your thoughts are physical, empirical phenomena, so you can’t take their existence for granted while denying the physical world the same status.
If anyone really truly believes that the empirical world doesn’t exist, you haven’t heard from them. They might believe that they believe it, but anyone who truly believed that it doesn’t exist, or even simply that we have no evidence either way and it’s therefore a tossup, wouldn’t bother arguing about it (it’s as likely to cause harm as good). They’d pick their actions completely at random, and probably die because “eat” never came up on their list. If anyone truly thinks that the status of the physical world is questionable, as a serious position, I’d like to meet them. I’d also like to get them help, because they are clinically insane (that’s what we call people who can’t connect to reality on some level).
Basically, the whole discussion is moot. There is no reason for me to deny the existence of what I see, nor for you to do so, nor anyone else having the discussion. Reality exists, and that is true, whether or not you can argue a rock into believing it. I don’t care what rocks, or neutral judges, or anyone like that believes. I care about what I believe and what other humans and human-like things believe. That’s why philosophy in that manner is worthless—it’s all about argumentation, persuasion, and social rules, not about seeking truth.
Your argument is about as valid as “Take it on faith”. First, unless appealing to pragmatism, your argument is circular in using the belief of others when you can’t justifiably assume their existence. Second, your argument is irrational in that it appeals to “Everybody believes X” to support X. Third, a source claiming X to be so is only evidence for X being so if you have reason to consider the source reliable.
You are also mixing up “epistemic order” with “empirical order”, to coin two new concepts. “Epistemic order” represents orders of inference: if I infer A from B and B from C, then C is prior to B and B is prior to A in epistemic order, regardless of the real-world relation of whatever they are. “Empirical order”, of course, represents what is the empirical cause of what (if indeed anything causes anything).
A person detects their own thoughts in a different way from the way they detect their own senses, so they are unrelated in epistemic order. You raise a valid point about assuming that one’s thoughts really are one’s thoughts, but unless resorting to the Memory Argument (which is part of the Evil Demon argument I discussed) they are at least available as arguments to consider.
The Foundationalist skeptic is arguing that believing in the existence of the world IS IRRATIONAL. Without resorting to the arguments I describe in the first post, there seems to be no way to get around this. Pragmatism clearly isn’t a way around it, after all.
1: Occam’s Razor has already been covered. The concept inherently rests (unless you take William of Ockham’s original version, which cannot be applied in the same way) on empirical observations about the world, which are the things under doubt.
2: The argument started over whether it is rational to trust the senses, and turned into an argument about the proper rules to decide that question. Such a question cannot be solved empirically. Besides, such a rule cannot justify itself, as it is not empirically rooted.
3: I considered this possibility, but wasn’t confident enough to claim it because occasionally, despite the nature of human concepts, a simplistic explanation actually works. For example, that “a trout is a type of fish” is true as a linguistic statement, no clarification or deeper understanding of the human mind required.
My mind is good at Verbal Comprehension skills, such as Philosophy and Law. To get into Law at Melbourne, I need to get good marks. Philosophy is a subject at which I get good marks, and one that is fun because of how my brain works, so I do it. I take a genuine interest because I like the intellectual stimulation and I want to be right about the sort of things philosophy covers.
Deferring to a simplicity prior is good for the outside world, but also raises the question of where you got your laws of thought and your assumption of simplicity. At some point you do need to say “okay, that’s good enough,” because it’s always possible to have started from the wrong thoughts.
Explanations aren’t first and foremost about what the world is like. They’re about what we find satisfying. It’s like how people keep trying to explain quantum mechanics in terms of balls and springs—it’s not because balls and springs are inherently better, it’s because we find them satisfying enough that once we explain the world in terms of them we can say “okay, that’s good enough.”
Philosophical Infinitism in a nutshell (the conclusions, not the argument line, which seems unusual as far as I can tell).
Anyway, the Coherentists would say that you can simply go around in circles for justification (factoring for “webbiness”), whilst the Foundationalist skeptics would say that this supports the view that belief in the existence of the world is inherently irrational. Just because something is satisfying doesn’t mean it has any correlation with reality.
The truth is consistent, but not all consistent things are true. So yeah.
I think the viewpoint that it’s not only necessary but okay to have unjustified fundamental assumptions relies on fairly recent stuff. Aristotle could probably tell you why it was necessary (it’s just an information-theoretic first cause argument after all), but wouldn’t have thought it was okay, and would have been motivated to reach another conclusion.
It’s like I said about explanations. Once you know that humans are accidental physical processes, that all sorts of minds are possible, and some of them will be wrong, and that’s just how it is, then maybe you can get around to thinking it’s okay for us humans, who are after all just smart meat, to just accept some stuff to get started. The reason that we don’t fall apart into fundamentally irreconcilable worldviews isn’t magic, it’s just the fact that we’re all pretty similar, having been molded by the constraints of reality.
That the empirical world exists is a supposition you were born into. The argument is over whether that’s satisfying enough to be called an explanation.
My previous reply wasn’t very helpful, sorry. Let me reiterate what I said above: making assumptions isn’t so much rational as unavoidable. And so you ask “then, should we believe in the external world?”
Well, this question has two answers. The first is that there is no argument that will convince an agent who didn’t make any assumptions that they should believe in an external world. In fact, there is no truth so self-evident it can convince any reasoner. For an illustration of this, see What the Tortoise Said to Achilles. Thus, from a perspective that makes no assumptions, no assumption is particularly better than another.
There is a problem with the first answer, though. This is that “the perspective that makes no assumptions” is the epistemological equivalent of someone with a rock in their head. It’s even worse than the tortoise—it can’t talk, it can’t reason, because it doesn’t assume even provisionally that the external world exists or that (A and A→B) → B. You can’t convince it of anything not because all positions are unworthy, but because there’s no point trying to convince a rock.
The second answer is that of course you should believe in the external world, and common sense, and all that good stuff. Now, you may say “but you’re using your admittedly biased brain to say that, so it’s no good,” but, I ask you, what else should I use? My kidneys?
If you prefer a slightly more sophisticated treatment, consider different agents interpreting “should we believe in the external world” with different meanings of the word “should”. We can call ours human_should, and yes, you human_should believe in the external world. But the word no_assumptions_should does not, in fact, have a definition, because the agent with no assumptions, the guy with a rock in his head, does not assume up any standards to judge actions with. Lacking this alternative, the human_reasonable course of action is to interpret your question as “human_should we believe in the external world,” to which the answer is yes.
This is the place to whip out the farmer/directions joke. The one that ends, “you just can’t get there from here.”
I’d already considered the “What the Tortoise said to Achilles” argument in a different form. I’d gotten around it (I was arguing Foundationalism until now, remember) by redefining self-evident as:
What must be true in any possible universe.
If a truth is self-evident, then a universe where it was false simply COULD NOT EXIST for one reason or another. Eliezer has described a non-Reductionist universe the way I believe a legitimate self-evident truth (by this definition) should be described. To those who object, I call it self-evident’ (“self-evident dash”, as I say it in normal conversation) and use it instead of self-evident as a basis for justification.
The Foundationalist skeptics in the debate would laugh at your argument, point out you can’t even assume the existence of a brain with justification, nor the existence of “should” either in the human sense or any other. Thus your argument falls apart.
I agree with the foundationalist skeptics, except that the claim that anything “falls apart” is, of course, something that they just assume without justification, and should be discarded :)
Self-evident from the definition of rational: It is irrational to believe a proposition if you have no evidence for or against it.
Empirical evidence is not evidence if you have no reason to trust it. Therefore, the fact that your argument falls apart is self-evident given the premises and conclusions therein.
The “definition of rational” is already without foundation—see again What the Tortoise Said to Achilles, and No Universally Convincing Arguments.
Or perhaps I’m overestimating how skeptical normal skepticism is? Is it normal for foundationalist skeptics to say that there’s no reason to believe the external world, but that we have to follow certain laws of thought “by definition,” and thus be unable to believe the Tortoise could exist? That’s not a rhetorical question, I’m pretty ignorant about this stuff.
I’ve already gotten past the arguments in those two cases by redefining self-evident by reference to what must be true in any possible universe. Eliezer himself describes reductionism in a way which fits my new idea of self-evident. The Foundationalist skeptics agree with me. As for the definition of rational, if you understand nominalism you will see why the definition is beyond dispute.
The Foundationalist Skeptic supports starting from no assumptions except those that can be demonstrated to be self-evident.
So, you agree that the Foundationalist Skeptic rejects the use of modus ponens, since Achilles cannot possibly convince the Tortoise to use it?
Also, I recommend this post. You seem to be roving into that territory. And calling anything, even modus ponens, “beyond dispute” only works within a certain framework of what is disputable—someone with a different framework (the tortoise) may think their framework is beyond dispute. In short, the reflective equilibrium of minds does not have just one stable point.
Just to remind you, I am not TECHNICALLY arguing for Foundationalist skepticism here. My argument is that it doesn’t have any major weaknesses OTHER THAN the ones I’ve already mentioned.
Regarding the use of modus ponens, that WAS a problem until I redefined self-evident to refer to what must be true in any possible universe. This is a mind-independent definition of self-evident.
I suspect a Foundationalist skeptic shouldn’t engage with Eliezer’s arguments in this case, as they appeal to empirical evidence, but leaving that aside the ordinary definition of ‘rational’ contains a contradiction. In ordinary cases of “rationality”, if somebody claims A because of X and is asked “Why should I trust X?”, the claimer is expected to have an answer for why X is trustworthy.
The four possible solutions to this are Weak Foundationalism (end up in first causes they can’t justify), Infinitism (infinite regress), Coherentism (believe because knowledge coheres), and Strong Foundationalism. This excludes appealing to Common Sense, as Common Sense is both incoherent and commonly considered incompatible with Rationality.
A Weak Foundationalist is challengeable on privileging their starting points, plus the fact that any reason they give for privileging said starting points is itself a reason for their starting point and hence another stage back. Infinitism and Coherentism have the problem that without a first cause we have no reason to believe they cohere with reality. This leaves Strong Foundationalism by default.
self-evident to refer to what must be true in any possible universe. This is a mind-independent definition of self-evident.
So why doesn’t the Tortoise agree that modus ponens is true in any possible universe? Do you have some special access to truth that the Tortoise doesn’t? If you don’t, isn’t this just an unusual Neurathian vessel of the nautical kind?
What the Tortoise believes is irrelevant. In any universe whatsoever, proper modus ponens will work. Another way of showing this: a universe where it doesn’t work would be internally incoherent. Arguments are mind-independent; whether my mind has special access to truth or not (theoretically, I may simply have gotten it right this time and this time only), my arguments are just as valid.
Eliezer is right to say that you can’t argue with a rock. However, insane individuals who disagree in the Tortoise case are irrelevant, because the reasoning is not based on universal agreement about first premises but on the fact that in any possible universe the premises must be true.
I agree—modus ponens works, even though there are some minds who will reject it with internally coherent criteria. Even criteria as simple as “modus ponens works, except when it would lead to belief in the primality of 7 being added to your belief pool”—this definition defends itself, because if it were wrong, you could prove 7 was prime; therefore it’s not wrong.
You could be put in a room with one of these 7-denialists, and no argument you made could convince them that they had the wrong form of modus ponens, and you had the right one.
But try seeing it from their perspective. To them, 7 not being prime is just how it is. To them, you’re the 7-denialist, and they’ve been put in a room with you, yet are unable to convince you that you have the wrong form of modus ponens, and they have the right one.
Suppose you try to show that a universe where 7 isn’t prime is internally inconsistent. What would the proof look like? Well, it would look like some axioms of arithmetic, which you and the 7-denialists share. Then you’d apply modus ponens to these axioms, until you reached the conclusion that 7 is prime, and thus any system with “7 is not prime” added to the basic axioms would be inconsistent.
What would the 7-denialist you’re in a room with say to that? I think it’s pretty clear—they’d say that you’re making a very elementary mistake; you’re just applying modus ponens wrong. In the step where you go from 7 not being divisible by 2, 3, 4, 5 or 6, to 7 being prime, you’ve committed a logical fallacy, and have not shown that 7 is prime from the basic axioms. Therefore you cannot rule out that 7 is not prime, and your version of modus ponens is therefore not true in every possible universe.
Just because you can use something to prove itself, they say, doesn’t mean it’s right in every possible universe. You should try to be a little more cosmopolitan and seriously consider that 7 isn’t prime.
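For what it’s worth, the arithmetic step the imagined proof relies on—that 7 has no divisor among 2 through 6—is mechanically checkable. This is only a sketch of that one step (the function name is mine, not anything from the discussion), and of course the 7-denialist disputes the inference drawn from the arithmetic, not the arithmetic itself:

```python
# Trial division over exactly the range the imagined proof appeals to.
def has_divisor_in_range(n, lo, hi):
    """Return True if n has a divisor d with lo <= d <= hi."""
    return any(n % d == 0 for d in range(lo, hi + 1))

assert not has_divisor_in_range(7, 2, 6)  # 7 survives trial division
assert has_divisor_in_range(9, 2, 6)      # 9 = 3 * 3 does not
```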
I’m guessing you disagree with Eliezer’s thoughts on Reductionism, then?
Under your first defence of their position, the 7-denialists are making a circular argument. Circular arguments aren’t self-evidently wrong, but they are self-evidently not evidence, as there is no justification for believing any step of them. The argument for conventional modus ponens is not a circular argument.
The second argument would be that the 7-denialists are making an additional assumption they haven’t proven, whilst the Foundationalist Skeptic starts with no assumptions. That there is an inconsistency in 7 being prime needs demonstrating, after all. If you redefine Prime to exclude 7, then the claim is strictly correct and we don’t have a disagreement, but we don’t need a different logic for that. (And the standard definition of Prime is more mathematically useful.)
Finally, the Foundationalist Skeptic would argue that they aren’t using something to prove itself: they are starting from no assumptions whatsoever. I have concluded, as I mentioned, that there is a problem with their position, but not the one you claim.
Warning: I am not a philosophy student and haven’t the slightest clue what any of your terms mean. That said, I can still answer your questions.
1) Occam’s Razor to the rescue! If you distribute your priors according to complexity and update on evidence using Bayes’ Theorem, then you’re entirely done. There’s nothing else you can do. Sure, if you’re unlucky then you’ll get very wrong beliefs, but what are the odds of a demon messing with your observations? Pretty low, compared to the much simpler explanation that what you think you see correlates well to the world around you. One and zero are not probabilities; you are never certain of anything, even those things you’re probably getting used to calling a priori truths. Learn to abandon your intuitions about certainty; even if you could be certain of something, our default intuitions will lead us to make bad bets when certainty is involved, so there’s nothing there worth holding on to. In any case, the right answer is understanding that beliefs are always, always, always uncertain. I’m pretty sure that 2 + 2 = 4, but I could be convinced otherwise by an overwhelming mountain of evidence.
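As a toy illustration of “distribute your priors according to complexity” (a Solomonoff-flavoured sketch only; the function name and the description lengths below are made up for the example, not anything the comment specifies): assign each hypothesis weight 2^-(description length in bits), then normalise.

```python
# Toy complexity-weighted prior: shorter descriptions get exponentially
# more prior mass, and the weights are normalised to sum to 1.
def complexity_prior(hypotheses):
    """hypotheses: dict mapping hypothesis name -> description length in bits."""
    weights = {h: 2.0 ** -bits for h, bits in hypotheses.items()}
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

# Hypothetical description lengths, purely for illustration:
priors = complexity_prior({
    "my senses track reality": 10,
    "a demon fakes my senses": 25,
})
# The simpler hypothesis dominates the prior, which is the point of the
# "what are the odds of a demon?" argument above.
```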
2) I don’t know what question is being asked here, but if it has no possible impact on the real world then you can’t decide if it’s true or false. Look at Bayes’ Theorem; if probability (evidence given statement) is equal to probability (evidence) then your final belief is the same as your prior. If there is in principle no experiment you could run which would give you evidence for or against it, then the question is not really a question; knowing it was true or false would tell you nothing about which possible world you live in; it would not let you update your map. It is not merely useless but fundamentally not in the same class of statements as things like “are apples yellow?” or “should machines have legal rights, given ‘should’ referring to generalized human preferences?” If there is an experiment you could run in principle, and knowing whether the statement is true or false would tell you something, then you simply have to refer to Occam’s Razor to find your prior. You won’t necessarily get an answer that’s firmly one way or another, but you might.
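The no-update condition in 2) can be made concrete with a one-line Bayes calculation (a sketch with made-up numbers): when P(evidence | statement) equals P(evidence), the posterior equals the prior, so the “evidence” moves nothing.

```python
# Bayes' Theorem: P(H|E) = P(E|H) * P(H) / P(E)
def bayes_update(prior, p_e_given_h, p_e):
    return p_e_given_h * prior / p_e

# When the statement makes no difference to what we expect to observe,
# P(E|H) == P(E) and the update leaves the prior untouched.
assert abs(bayes_update(0.3, 0.5, 0.5) - 0.3) < 1e-12

# When the statement does predict the evidence, the belief moves.
assert bayes_update(0.3, 0.9, 0.6) > 0.3
```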
3) I’ll admit I had to look this up to give an answer. What I found was that there is literally not a question here. Go read A Human’s Guide to Words (sequence on LW) to understand why, although I’ll give a brief explanation. “Knowledge”, the word, is not a fundamental thing. Nowhere is there inscribed on the Almighty Rock of Knowledge that “knowledge” means “justified true belief” or “correctly assigned >90% certainty” or “things the Flying Spaghetti Monster told you.” It only has meaning as a symbol that we humans can use to communicate. If I made it clear that I was going to use the phrase “know x” to mean “ate x for breakfast”, and then said “I know a chicken biscuit”, I would be committing an error; but that error would have nothing to do with the true meaning of “know”. When I say “I know that the earth is not flat”, I mean that I have seen pretty strong evidence that the earth really isn’t flat, such that for it to be flat would require a severe mental break on my part or other similarly unlikely circumstances. I don’t know it with certainty; I don’t know anything with certainty. But that’s not what “know” means in the minds of most people I speak with, so I can say “I know the world is not flat” and everyone around me gets the right idea. There is no such thing as a correct attribution of knowledge, nor an incorrect one, because knowledge is not a fundamental thing nor sharply defined, but instead it’s a fuzzy shape in conceptspace which corresponds to some human intuitions about the world but not to the actual territory. Humans are biased towards concrete true/false dichotomies, but that’s not how the real world works. Once you realize that beliefs are probabilities you’ll realize how incredibly silly most philosophical discussions of knowledge are.
My quick advice to you in general (so that you can solve future problems like this on your own) is three-fold. First, learn Bayes and keep it close to you at all times. The Twelve Virtues of Rationality are nice for a way to remind yourself what it means to want to actually get the right answer. Second, read A Human’s Guide to Words, and in particular play Rationalist Taboo constantly. Play it with yourself before you speak and with others when they use words like “knowledge” or “free will”. Do not simply accept a vague intuition; play it until you’re certain of what you mean (and it matches what you meant when you first said it), or certain that you have no idea. Pro tip: free will sounds like a pretty simple concept, but you have no idea how to specify it other than that thing that you can feel you have. (And any other specification fails to capture what you or anybody else really want to talk about). Third, and I’m sure some people will disagree here, but… Get the heck out of philosophy. There is almost nothing of value that you’ll get from the field. Almost all of it is trash, because there really aren’t enough interesting questions that don’t require you to actually go out and do /gasp/ science to justify an entire field. Pretty much all the important ones have answers already, although you wouldn’t know that by talking to philosophers. Philosophy was worthwhile in Ancient Greece when “philosopher” meant “aspiring rationalist” and human knowledge was at the stage of gods controlling everything, but in the modern day we already have the basic rationalist’s toolkit available for mass consumption. Any serious advance made in the Art will come from needing it to do something that the Art you were taught couldn’t do for you, and such advances are what philosophy should be, but isn’t, providing. 
You won’t find need of a new rationalist Art if you’re trying to convince other people, who by definition do not already have this new Art, of some position that you stumbled upon because of other people who argued it convincingly to you. If you care about the human state of knowledge, go into any scientific discipline. Otherwise just pick literally anything else. There’s nothing for you in philosophy except for a whole lot of confused words.
Ok, response here from somebody who has studied philosophy. I disagree with a lot of what DSherron said, but on one point we agree—don’t get a philosophy degree. Take some electives, sure—that’ll give you an introduction to the field—but after that there’s absolutely no reason to pay for a philosophy degree. If you’re interested in it, you can learn just as much by reading in your spare time for FREE. I regret my philosophy degree.
So, now that that’s out of the way: philosophy isn’t useless. In fact, at its more useful end it blurs pretty seamlessly into mathematics. It’s also relevant to cognitive science, and in fact science in general. The only time philosophy is useless is when it isn’t being used to do anything. So, sure, pure philosophy is useless, but that’s like saying “pure rationality is useless”. We use rationality in combination with every other discipline; that’s the point of rationality.
As for the OP’s questions:
DSherron suggests following the method of the 14th century philosopher William of Ockham, but I don’t think that’s relevant to the question. As far as I can tell, ALL justificatory systems suffer from the Münchhausen Trilemma. Given that, Foundationalism and Coherentism seem to me to be pretty much equivalent. You wouldn’t pick incoherent axioms as your foundations, and conversely any coherent system of justifications should be decomposable into an orthogonal set of fundamental axioms and theorems derived thereof. Maybe there’s something I’m missing, though.
DSherron’s point is a good one. It was first formalised by the philosopher-mathematician Leibniz who proposed the principle of the Identity of Indiscernibles.
DSherron suggests that the LW sequence “A Human’s Guide to Words” is relevant here. Since that sequence is basically a huge discussion of the philosophy of language, and makes dozens of philosophical arguments aimed at correcting philosophical errors, I agree that it is a useful resource.
I’m doing a philosophy degree for two reasons. The first is that I enjoy philosophy (and a philosophy degree gives me plenty of opportunities to discuss it with others). The second is that Philosophy is my best prospect of getting the marks I need to get into a Law course. Both of these are fundamentally pragmatic.
1: Any Coherentist system could be remade as a Weak Foundationalist system, but the Weak Foundationalist would be asked why they give their starting axioms special privileges (hence both sides of my discussion have dissed them massively).
The Coherentists in the argument have gone to great pains to say that “consistency” and “coherence” are different things: their idea of coherence is complicated, but basically involves judging any belief by how well interconnected it is with other beliefs. The Foundationalists have said that although they ultimately resort to axioms, those axioms are self-evident axioms that any system must accept.
2: Could you clarify this point please? Superficially it seems contradictory (as it is a principle that cannot be demonstrated empirically itself), but I’m presumably missing something.
3: About the basic philosophy of language I agree. What I need here is empirical evidence to show that this applies specifically to the Contextualist vs. Invariantist question.
For 1) the answer is basically to figure out what bets you’re willing to make. You don’t know anything, for strong definitions of know. Absolutely nothing, not one single thing, and there is no possible way to prove anything without already knowing something. But here’s the catch: beliefs are probabilities. You can say “I don’t know that I’m not going to be burned at the stake for writing on Less Wrong” while also saying “but I probably won’t be”. You have to make a decision; choose your priors. You can pick ones at random, or you can pick ones that seem like they work to accomplish your real goals in the real world; I can’t technically fault you for priors, but then again justification to other humans isn’t really the point. I’m not sure how exactly Coherentists think they can arrive at any beliefs whatsoever without taking some arbitrary ones to start with, and I’m not sure how anyone thinks that any beliefs are “self-evident”. You can choose whatever priors you want, I guess, but if you choose any really weird ones let me know, because I’d like to make some bets with you… We live in a low-entropy universe; simple explanations exist. You can dispute how I know that, but if you truly believed any differently then you should be making bets left and right and winning against anyone who thought something silly, like thinking a coin would stay 50/50 just because it usually does. Basically, you can’t argue anything to an ideal philosopher of perfect emptiness, any more than you can argue anything to a rock. If you refuse to accept anything, then you can go do whatever you want (or perhaps you can’t, since you don’t know what you want), and I’ll get on with the whole living thing over here. You should read “The Simple Truth”; it’s a nice exploration of some of these ideas.
You can’t justify knowledge, at all, and there’s no difference between claiming an arbitrary set of axioms and an arbitrary set of starting beliefs (they are literally the same thing), but you can still count sheep, if you really want to. 2) is mostly contained in 1), I think.
3) Why do you need empirical evidence? What could that possibly show you? I guess you could theoretically get a bunch of Contextualists and Invariantists together and show that most of them think that “know” has a fundamental meaning, but that’s only evidence that those people are silly. Words are not special. To draw from your lower comment to me, “a trout is a type of fish” is not fundamentally true, linguistically or otherwise. It is true when you, as an English speaker, say it in an English forum, read by English speakers. Is “Фольре є омдни з дівви риб” a linguistic truth? That’s (probably) the same sentence in a language picked at random off Google Translate. So, is it true? Answer before you continue reading. Actually, I lied. That sentence is gibberish; I moved the letters around. A native speaker of that language would have told you it was clearly not true. But you had no idea whether it was or wasn’t; you don’t speak that language, and for that matter neither do I. I could have just written profanity for all I know. But the meanings are not fundamental to the little squiggles on your computer screen; they are in your mind. Words are just mental paintbrush handles, and with them we can draw pictures in each other’s minds, similar to those in our own. If you knew that I had had some kind of neurological malfunction such that I associated the word “trout” with a mental image of a moderately sized land-bound mammal, and I said “a trout is a type of fish”, you would know that I was wrong (and possibly confused about what fish were). If you told me “a trout is a type of fish”, without clarifying that your idea of trout was different from mine, you’d be lying. Words do not have meanings; they are simply convenient mental handles to paint broad pictures in each other’s minds. “Know” is exactly the same way. There is no true, really real, more real than that other one meaning of “know”, just the broad pictures that the word can paint in minds.
The only reason anyone argues over definitions is to sneak in underhanded connotations (or, potentially, to demand that they not be brought in). There is no argument. Whatever the Contextualists want to mean by “know” can be called “to flozzlebait”, and whatever the Invariantists want to mean by it can be called “to mankieinate”. There, now that they both understand each other, they can resolve their argument… If there ever even was one (which I doubt).
1: The Foundationalists have claimed probability is off the metaphorical table: the concept of probability rests either on subjective feeling (irrational) or on empirical evidence (circular, as our belief in empirical evidence rests on the assumption that it is probable). They had problems with self-evident, but I created a new definition as “Must be true in any possible universe” (although I’m not sure of the truth of his conclusion, the way Eliezer describes a non-reductionist universe basically claims for reductionism this sort of self-evidence).
2: Doesn’t solve the problem I have with it.
3: Of the statement “A trout is a type of fish”, the simplification “This statement is true in English” is good enough to describe reality. The invariantist, and likely the contextualist, would claim that universally, across languages, humans have a concept of “knows”, however they describe it, which fits their philosophy.
You’re right, my statement was far too strong, and I hereby retract it. Instead, I claim that philosophy which is not firmly grounded in the real world, such that it effectively becomes another discipline, is worthless. A philosophy book is unlikely to contain very much of value, but a cognitive science book which touches on ideas from philosophy is more valuable than one which doesn’t. The problem is that most philosophy is just attempts to argue for things that sound nice, logically, with not a care for their actual value. Philosophy is not entirely worthless, since it forms the backbone of rationality, but the problem is the useful parts are almost all settled questions (and the ones that aren’t are effectively the grounds of science, not abstract discussion). We already know how to form beliefs that work in the real world, justified by the fact that they work in the real world. We already know how to get to the most basic form of rationality, whence we can then use the tools recursively to improve them. We know how to integrate new science into our belief structure. The major thing which has traditionally been a philosophical question which we still don’t have an answer to, namely morality, is fundamentally reduced to an empirical question: what do humans in fact value? We already know that morality as we generally imagine it is fundamentally a flawed concept, since there are no moral laws which bind us from the outside, just the fact that we value some things that aren’t just us and our tribe. The field is effectively empty of useful open questions (the justification of priors is one of the few relevant ones remaining, but it’s also one which doesn’t help us in real life much).
Basically, whether philosophers dispute something is essentially uncorrelated with whether there is a clear answer on it or not. If you want to know truth, don’t talk to a philosopher. If you pick your beliefs based on strength of human arguments, you’re going to believe whatever the most persuasive person believes, and there’s only weak evidence that that should correlate with truth. Sure, philosophy feeds into rationality and cog-sci and mathematics, but if you want to figure out which parts do so in a useful way, go study those fields. The problem with philosophy as a field is not the questions it asks but the way it answers them; there is no force that drives philosophers to accept correct arguments that they don’t like, so they all believe whatever they want to believe (and everyone says that’s ok). I mean, anti-reductionism? Epiphenomenalism? This stuff is maybe a little better than religious nonsense, but it still deserves to be laughed at, not taken as a serious opponent. My problem is not the fundamentals of the field, but the way it exists in the real world.
If you judge philosophy by what helps us in the empirical world, this is mostly correct. The importance of rationality to philosophy (granted the existence of an empirical world) I also agree with. However, some people want to know the true answers to these questions, useful or not. For that, argument is all we’ve got.
I would mostly agree with rationality training for philosophers, except that there is something both circular and silly about using empirical data to influence, even indirectly, discussions of whether the empirical world exists.
Super quick and dirty response: I believe it exists, you believe it exists, and everyone you’ve ever spoken to believes it exists. You have massive evidence that it exists in the form of memories which seem far more likely to come from it actually existing than any other possibility. Is there a chance we’re all wrong (or that you’re hallucinating the rest of us, etc.)? Of course. There always is. If someone demands proof that it exists, they will be disappointed—there is no such thing as irrefutable truth. Not even “a priori” logic—not only could you be mistaken, but additionally your thoughts are physical, empirical phenomena, so you can’t take their existence as granted while denying the physical world the same status.
If anyone really truly believes that the empirical world doesn’t exist, you haven’t heard from them. They might believe that they believed it, but to truly believe that it doesn’t exist, or even simply that we have no evidence either way and it’s therefore a tossup, they won’t bother arguing about it (it’s as likely to cause harm as good). They’ll pick their actions completely at random, and probably die because “eat” never came up on their list. If anyone truly thinks that the status of the physical world is questionable, as a serious position, I’d like to meet them. I’d also like to get them help, because they are clinically insane (that’s what we call people who can’t connect to reality on some level).
Basically, the whole discussion is moot. There is no reason for me to deny the existence of what I see, nor for you to do so, nor anyone else having the discussion. Reality exists, and that is true, whether or not you can argue a rock into believing it. I don’t care what rocks, or neutral judges, or anyone like that believes. I care about what I believe and what other humans and human-like things believe. That’s why philosophy in that manner is worthless—it’s all about argumentation, persuasion, and social rules, not about seeking truth.
Your argument is about as valid as “Take it on faith”. Unless appealing to pragmatism, your argument is circular in using the belief of others when you can’t justifiably assume their existence. Second, your argument is irrational in that it appeals to “Everybody believes X” to support X. Thirdly, a source claiming X to be so is only evidence for X being so if you have reason to consider the source reliable.
You are also mixing up “epistemic order” with “empirical order”, to frame two new concepts. “Epistemic order” represents orders of inference: if I infer A from B and B from C, then C is prior to B and B is prior to A in epistemic order, regardless of the real-world relation of whatever they are. “Empirical order”, of course, represents what is the empirical cause of what (if indeed anything causes anything).
A person detects their own thoughts in a different way from the way they detect their own senses, so they are unrelated in epistemic order. You raise a valid point about assuming that one’s thoughts really are one’s thoughts, but unless resorting to the Memory Argument (which is part of the Evil Demon argument I discussed) they are at least available as arguments to consider.
The Foundationalist skeptic is arguing that believing in the existence of the world IS IRRATIONAL. Without resorting to the arguments I describe in the first post, there seems to be no way to get around this. Pragmatics clearly isn’t one, after all.
1: Occam’s Razor has already been covered. The concept inherently rests (unless you take William of Ockham’s original version, which cannot be applied in the same way) on empirical observations about the world, which are the very things under doubt.
2: The argument started on if it is rational to trust the senses, and turned into an argument about the proper rules to decide that question. Such a question cannot be solved empirically. Besides, such a rule cannot justify itself as it is not empirically rooted.
3: I considered this possibility, but wasn’t confident enough to claim it, because occasionally, despite the nature of human concepts, a simplistic explanation actually works. For example, “a trout is a type of fish” is true as a linguistic statement, no clarification or deeper understanding of the human mind required.
My mind is good at Verbal Comprehension skills, such as Philosophy and Law. To get into Law at Melbourne, I need to get good marks. Philosophy is a subject at which I get good marks, and which is fun because of how my brain works, so I do it. I take a genuine interest because I like the intellectual stimulation and I want to be right about the sort of things philosophy covers.
Deferring to a simplicity prior is good for the outside world, but also raises the question of where you got your laws of thought and your assumption of simplicity. At some point you do need to say “okay, that’s good enough,” because it’s always possible to have started from the wrong thoughts.
Explanations aren’t first and foremost about what the world is like. They’re about what we find satisfying. It’s like how people keep trying to explain quantum mechanics in terms of balls and springs—it’s not because balls and springs are inherently better, it’s because we find them satisfying enough to us that once we explain the world in terms of them we can say “okay, that’s good enough.”
Philosophical Infinitism in a nutshell (the conclusions, not the line of argument, which seems unusual as far as I can tell).
Anyway, the Coherentists would say that you can simply go around in circles for justification (factoring for “webbiness”), whilst the Foundationalist skeptics would say that this supports the view that belief in the existence of the world is inherently irrational. Just because something is satisfying doesn’t mean it has any correlation with reality.
The truth is consistent, but not all consistent things are true. So yeah.
I think the viewpoint that it’s not only necessary but okay to have unjustified fundamental assumptions relies on fairly recent stuff. Aristotle could probably tell you why it was necessary (it’s just an information-theoretic first cause argument after all), but wouldn’t have thought it was okay, and would have been motivated to reach another conclusion.
It’s like I said about explanations. Once you know that humans are accidental physical processes, that all sorts of minds are possible, and some of them will be wrong, and that’s just how it is, then maybe you can get around to thinking it’s okay for us humans, who are after all just smart meat, to just accept some stuff to get started. The reason that we don’t fall apart into fundamentally irreconcilable worldviews isn’t magic, it’s just the fact that we’re all pretty similar, having been molded by the constraints of reality.
The problem is that I can’t argue based on the existence of the empirical world when that is the very thing the argument is about.
That the empirical world exists is a supposition you were born into. The argument is over whether that’s satisfying enough to be called an explanation.
The argument is about whether the belief is rational or irrational. Discussing it in the manner you describe is off the point.
My previous reply wasn’t very helpful, sorry. Let me reiterate what I said above: making assumptions isn’t so much rational as unavoidable. And so you ask “then, should we believe in the external world?”
Well, this question has two answers. The first is that there is no argument that will convince an agent who didn’t make any assumptions that they should believe in an external world. In fact, there is no truth so self-evident it can convince any reasoner. For an illustration of this, see What the Tortoise Said to Achilles. Thus, from a perspective that makes no assumptions, no assumption is particularly better than another.
There is a problem with the first answer, though. This is that “the perspective that makes no assumptions” is the epistemological equivalent of someone with a rock in their head. It’s even worse than the tortoise—it can’t talk, it can’t reason, because it doesn’t assume even provisionally that the external world exists or that (A and A->B) → B. You can’t convince it of anything not because all positions are unworthy, but because there’s no point trying to convince a rock.
The second answer is that of course you should believe in the external world, and common sense, and all that good stuff. Now, you may say “but you’re using your admittedly biased brain to say that, so it’s no good,” but, I ask you, what else should I use? My kidneys?
If you prefer a slightly more sophisticated treatment, consider different agents interpreting “should we believe in the external world” with different meanings of the word “should”. We can call ours human_should, and yes, you human_should believe in the external world. But the word no_assumptions_should does not, in fact, have a definition, because the agent with no assumptions, the guy with a rock in his head, does not assume up any standards to judge actions with. Lacking this alternative, the human_reasonable course of action is to interpret your question as “human_should we believe in the external world,” to which the answer is yes.
This is the place to whip out the farmer/directions joke. The one that ends, “you just can’t get there from here.”
“I say, farmer, you’re pretty close to a fool, ain’t’cha?”
“Yup, only this here fence between us.”
I’d already considered the “What the Tortoise said to Achilles” argument in a different form. I’d gotten around it (I was arguing Foundationalism until now, remember) by redefining self-evident as:
What must be true in any possible universe.
If a truth is self-evident, then a universe where it was false simply COULD NOT EXIST for one reason or another. Eliezer has described a non-Reductionist universe the way I believe a legitimate self-evident truth (by this definition) should be described. To those who object, I call it self-evident′ (“self-evident dash,” as I say it in normal conversation) and use it instead of self-evident as a basis for justification.
The Foundationalist skeptics in the debate would laugh at your argument, pointing out that you can’t even assume the existence of a brain with justification, nor the existence of “should,” either in the human sense or any other. Thus your argument falls apart.
I agree with the foundationalist skeptics, except that the claim that anything “falls apart” is, of course, itself something they just assume without justification, and should be discarded :)
Self-evident from the definition of rational: It is irrational to believe a proposition if you have no evidence for or against it.
Empirical evidence is not evidence if you have no reason to trust it. Therefore, the fact that your argument falls apart is self-evident given the premises and conclusions therein.
The “definition of rational” is already without foundation—see again What the Tortoise Said to Achilles, and No Universally Convincing Arguments.
Or perhaps I’m overestimating how skeptical normal skepticism is? Is it normal for foundationalist skeptics to say that there’s no reason to believe the external world, but that we have to follow certain laws of thought “by definition,” and thus be unable to believe the Tortoise could exist? That’s not a rhetorical question, I’m pretty ignorant about this stuff.
I’ve already gotten past the arguments in those two cases by redefining self-evident by reference to what must be true in any possible universe. Eliezer himself describes reductionism in a way which fits my new idea of self-evident. The Foundationalist skeptics agree with me. As for the definition of rational, if you understand nominalism you will see why the definition is beyond dispute.
The Foundationalist Skeptic supports starting from no assumptions except those that can be demonstrated to be self-evident.
So, you agree that the Foundationalist Skeptic rejects the use of modus ponens, since Achilles cannot possibly convince the Tortoise to use it?
Also, I recommend this post. You seem to be roving into that territory. And calling anything, even modus ponens, “beyond dispute” only works within a certain framework of what is disputable—someone with a different framework (the tortoise) may think their framework is beyond dispute. In short, the reflective equilibrium of minds does not have just one stable point.
Just to remind you, I am not TECHNICALLY arguing for Foundationalist skepticism here. My argument is that it doesn’t have any major weaknesses OTHER THAN the ones I’ve already mentioned.
Regarding the use of modus ponens, that WAS a problem until I redefined self-evident to refer to what must be true in any possible universe. This is a mind-independent definition of self-evident.
I suspect a Foundationalist skeptic shouldn’t engage with Eliezer’s arguments in this case, as they appeal to empirical evidence, but leaving that aside, the ordinary definition of “rational” contains a contradiction. In ordinary cases of “rationality”, if somebody claims A because of X and is asked “Why should I trust X?”, the claimer is expected to have an answer for why X is trustworthy.
The four possible solutions to this are Weak Foundationalism (ending in first causes they can’t justify), Infinitism (infinite regress), Coherentism (believing because knowledge coheres), and Strong Foundationalism. This excludes appealing to Common Sense, as Common Sense is both incoherent and commonly considered incompatible with Rationality.
A Weak Foundationalist is challengeable on privileging their starting points, plus the fact that any reason they give for privileging said starting points is itself a reason for their starting point and hence another stage back. Infinitism and Coherentism have the problem that without a first cause we have no reason to believe they cohere with reality. This leaves Strong Foundationalism by default.
So why doesn’t the Tortoise agree that modus ponens is true in any possible universe? Do you have some special access to truth that the Tortoise doesn’t? If you don’t, isn’t this just an unusual Neurathian vessel of the nautical kind?
What the Tortoise believes is irrelevant. In any universe whatsoever, proper modus ponens will work. Another way of showing this is that a universe where it doesn’t work would be internally incoherent. Arguments are mind-independent: whether my mind has a special access to truth or not (theoretically, I may simply have gotten it right this time and this time only), my arguments are just as valid.
Eliezer is right to say that you can’t argue with a rock. However, insane individuals who disagree in the Tortoise case are irrelevant, because the reasoning is not based on universal agreement on first premises but on the fact that in any possible universe the premises must be true.
I agree—modus ponens works, even though there are some minds who will reject it with internally coherent criteria. Even criteria as simple as “modus ponens works, except when it will lead to belief in the primeness of 7 being added to your belief pool”—this definition defends itself, because if it was wrong, you could prove 7 was prime, therefore it’s not wrong.
You could be put in a room with one of these 7-denialists, and no argument you made could convince them that they had the wrong form of modus ponens, and you had the right one.
But try seeing it from their perspective. To them, 7 not being prime is just how it is. To them, you’re the 7-denialist, and they’ve been put in a room with you, yet are unable to convince you that you have the wrong form of modus ponens, and they have the right one.
Suppose you try to show that a universe where 7 isn’t prime is internally inconsistent. What would the proof look like? Well, it would look like some axioms of arithmetic, which you and the 7-denialists share. Then you’d apply modus ponens to these axioms, until you reached the conclusion that 7 is prime, and thus any system with “7 is not prime” added to the basic axioms would be inconsistent.
What would the 7-denialist you’re in a room with say to that? I think it’s pretty clear—they’d say that you’re making a very elementary mistake, you’re just applying modus ponens wrong. In the step where you go from 7 not being factorable into 2, 3, 4, 5 or 6, to 7 being prime, you’ve committed a logical fallacy, and have not shown that 7 is prime from the basic axioms. Therefore you cannot rule out that 7 is not prime, and your version of modus ponens is therefore not true in every possible universe.
Just because you can use something to prove itself, they say, doesn’t mean it’s right in every possible universe. You should try to be a little more cosmopolitan and seriously consider that 7 isn’t prime.
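For what it’s worth, the disputed step—that 7 has no factor among 2 through 6—is a finite arithmetic check, not an open question. A minimal sketch in Python (the function name and trial-division approach are mine, just for illustration):

```python
def is_prime(n: int) -> bool:
    """Trial division: n > 1 is prime iff no d in 2..n-1 divides it evenly."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, n))

# The very step the 7-denialist rejects: 7 % d is nonzero for every d in 2..6,
# so under the standard inference rules, 7 is prime.
print(is_prime(7))
```

Of course, this doesn’t settle the dispute—the 7-denialist would say the leap from “no divisor found” to “prime” is exactly the misapplied modus ponens; the check only makes the disagreement concrete.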
I’m guessing you disagree with Eliezer’s thoughts on Reductionism, then?
The 7-denialists are making a circular argument in your first defence of their position. Circular arguments aren’t self-evidently wrong, but they are self-evidently not evidence, as there isn’t justification for believing any of them. The argument for conventional modus ponens is not a circular argument.
The second argument would be that the 7-denialists are making an additional assumption they haven’t proven, whilst the Foundationalist Skeptic starts with no assumptions. That there is an inconsistency in 7 being prime needs demonstrating, after all. If you redefine “prime” to exclude 7, then it is strictly correct and we don’t have a disagreement, but we don’t need a different logic for that. (And the standard definition of “prime” is more mathematically useful.)
Finally, the Foundationalist Skeptic would argue that they aren’t using something to prove itself—they start from no assumptions whatsoever. I have concluded, as I mentioned, that there is a problem with their position, but not the one you claim.
Well if you say so. Best of luck then.