Your first response seems fine to me. I’m generally not very good at picking up on subtle signals, but saying “I’m not interested, let’s be friends” or any variation thereof is as clear as day from any perspective I can successfully empathize with (which I know says more about me than about the workings of other humans). Certainly, I strongly oppose the notion (which I’ve seen several times in different contexts) that your statement was “too gentle”. Sure, your statement was gentle—and that’s great! You weren’t trying to be harsh, and you wanted to be friends, so your tone actually matched your intention. The force with which a statement is delivered cannot override its content, even when the two don’t match, or aren’t understood to match by the recipient. Just because you gave a nice “no” does not make that “no” any less important. Your response was sufficient to make your intention clear by a fairly strict standard, and the fact that it was ignored does not change that. I really don’t want to live in a world where everyone feels they have to act less nice than they want to just to get their meaning across, and especially not in a world where you skip straight to the “damaging friendship” response. I am aware that a harsher response may tend to get you listened to more often, but that’s not universal and it may come at a cost.
I’m a useless social retard, and thus am no longer receiving signals that I’m just-barely-making-it. I am instead receiving I’m-fucked-and-need-to-self-terminate-for-the-sake-of-the-pack signals.
Um, no you’re not? This is the theory of Group Selectionism. There is no chemical signal that your brain will naturally produce corresponding to “suicide for the good of the pack”. You can arrive at that idea through other means, but there is almost certainly no low-level chemical signal which corresponds to suicide for the good of the group; everyone who might’ve passed that gene on to you died out for the sake of the people that didn’t have it.
Your error is related to the Mind Projection Fallacy. You are confusing the causes of us calling something depression with the actual causes of that depression. We identify depression based on the symptoms; if you have them then we say you’re depressed, if you don’t then we say that you are not. In neither case are we assuming that causality flows from our observations to our conclusions. The DSM definition just defines what we’re talking about with the word “depression”—what set of symptoms we want to refer to. But the symptoms are caused by something, physically, and therefore the depression is equally caused by that same thing physically. The symptoms cause us to call it depression, but they (tautologically under Aristotelian reasoning) cannot be the cause of the depression, since they are the depression.
Taboo “compelling” and restate. If “compelling” does not mean persuasive, then what does it mean to you? Also taboo “committed” and “rational”—I think there’s a namespace conflict between your use of “rational” and the common Less Wrong usage, so restate using different terms. As a hint, try to imagine what a universally compelling argument would look like. What properties does it have? How do different minds react to understanding it, assuming they are capable of doing so? For bonus points, explain what it means to be rationally committed to something (without using those words or synonyms).
Also worth noting: P1 is a generalization over statements about minds, not minds.
The problem here is that the second option you offer does nothing to explain what a compelling argument is; it just passes the recursive buck onto the word “committed”. I know you said you recognize that, but unless we can show that this line of reasoning is coherent (let alone that it leads to a relevant conclusion, let alone a correct one), there’s no reason to assume that Eliezer’s point isn’t trivial in the end. Philosophers have believed a lot of silly things, after all. The only sensible resolution I can come up with is to take “committed to x” to mean “would, on reflection and given sufficient (accurate) information and a great deal more intelligence, believe x”. The problem is that this is still trivially false over the entirety of mindspace. You might, although I doubt it, be able to establish a statement of that form over all humans (I think Eliezer disagrees with me on the likelihood here). You could certainly not establish one over a mindspace that includes both humans and paperclip maximizers.
I suspect that our beliefs are close enough to each other at this point that any perceived differences are as likely to be due to minor linguistic quibbles as to actual disagreement. Which is to say, I wouldn’t have phrased it like you did (had I said it with that phrasing I would disagree) but I think that our maps are closer than our wording would suggest.
If anyone who thinks they have a coherent definition of a universally compelling moral argument (UCMA) that does not involve persuasiveness (subject to the above taboos) wants to chime in, I’d love to hear it. Otherwise, I think the thread has reached its (happy) conclusion.
I’ll give it a shot: an argument is universally compelling if no mind both a) has reasons to reject it, and b) has coherent beliefs. This is to say that a mind can only believe that the argument is false by believing a contradiction.
I think this may sound stronger than it actually is, for the same reasons that you can’t convince an arbitrary mind who does not accept modus ponens that it is true.
More to the point, recall that one rationalist’s modus tollens is another’s modus ponens. This definition is defeated by any mind that possesses a strong prior that the given UCMA is false, and is willing to accept any and all consequences of that fact as true (even if doing so contradicts mathematical logic, Occam’s Razor, Bayes, or anything else we take for granted). This prior is a reason to reject the argument (every decision to accept or reject a conclusion can be reduced to a choice of priors), and since such a mind is willing to abandon all beliefs which contradict its rejection, it will not hold any contradictory beliefs. It’s worth noting that “contradiction” is a notion from formal logic which not all minds need to hold as true; this definition technically imposes a very strong restriction on the space of minds which have to be persuaded. The law of non-contradiction (¬(A ∧ ¬A)) is a UCMA by definition under that requirement, even though I don’t hold that belief with certainty.
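To make concrete how strong that restriction is, here’s a minimal sketch in Lean 4 (my own illustration; nothing here comes from the thread): the law of non-contradiction is a theorem of the logic Lean implements, so a mind that rejects it is rejecting the entire framework the proof lives in, not one isolated belief.

```lean
-- A minimal Lean 4 sketch: the law of non-contradiction is a theorem
-- of the logic Lean implements. Rejecting it means rejecting the
-- surrounding proof framework, not one isolated proposition.
theorem lnc (A : Prop) : ¬(A ∧ ¬A) :=
  fun h => h.2 h.1  -- h.1 : A and h.2 : ¬A together yield False
```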
The arbitrary choice of priors, even for rational minds, actually appears to defeat any UCMA definition that does not beg the question. Of course, it is also true that any coherent definition begs the question one way or another (by defining which minds have to be persuaded such that it either demands certain arguments be accepted by all, or such that it does not). Now that I think about it, that’s the whole problem with the notion from the start. You have to define which minds have to be persuaded somewhere between a tape recorder shouting “2 + 2 = 5!” for eternity and including only your brain’s algorithm. And where you draw that line determines exactly which arguments, if any, are UCMAs.
And if you don’t have to persuade any minds, then I hesitate to permit you to call your argument “universally compelling” in any context where I can act to prevent it.
You’re just passing the recursive buck over to “rational”. Taboo rational, and see what you get out; I suspect it will be something along the lines of “minds that determine the right direction to shift the evidence in every case”, which, notably, doesn’t include humans even if you assume that there is an objectively decidable “rational” direction. There is no objectively determinable method to determine what the correct direction to shift is in any case; imagine an agent with anti-occamian priors, who believes that because the coin has come up heads 100 times in a row, it must be more likely to come up tails next time. It’s all a question of priors.
You’re confused about words; I recommend you read A Human’s Guide to Words, summarized by 37 Ways Words Can Be Wrong. I’ll try and give a quick explanation that will hopefully be helpful. Depression is not a low-level part of reality; it’s just a convenient label on our maps. The entire meaning—literally all of it, by the DSM definition—is that the person possess a certain number of symptoms from a list. If you know they express those symptoms they are depressed; if you know they are depressed you know they express those symptoms. That is, literally and entirely and without exception, everything that is true about the word depression as defined by DSM. There is no further question, no further information. There is no precedence, no ordering to the events between being DSM-depressed and having the symptoms. DSM-depression is in the map, not the territory, so there is no causality involved.
Actually, I’d like to put this metaphor in terms of 2 sets of maps. The first map just says “DSM-depressed” on a person. That map is compact; it enables compressed storage of lots of information, although it certainly is not lossless. When you pull that map out, and read it, and you know what DSM-depression means, you can then draw a second map. This map is a little bit more precise; it has a list of symptoms, and says they express some number of them. But you can’t then combine the maps, and write a single map which both contains the list of symptoms and the DSM-depressed tag. It would be redundant; there would be repeated information. The 2 maps are describing different levels of organization. It would be like looking at an airplane and saying “do the wings, engine, etc. cause this to be an airplane, or does the fact that it is an airplane cause the wings, engine, etc.” It is nonsense to ask the question; in the territory there is no “airplane” label, and for that matter no “wings” or “engine” labels either. Don’t confuse your map with a more detailed map, nor with the territory itself.
One other note is that you’re acting like the word “depression” has a meaning, no matter what a given definition defines it to mean. If I defined “depression” to mean “water”, and used it consistently, and made it clear to you what I meant, I would be committing an error with words; but that error would not be that “depression” doesn’t really mean “water”.
EDIT: Forgot to say this, but I’m tapping out. I’d recommend not clogging up the comment thread any more than we already have. If you still have questions feel free to PM me, and I will respond in more depth, but unless you’ve read the linked sequence and understood at least most of it (or put a genuine effort into trying) I’ll probably just point you back to it.
Be very, very cautious assigning probability 1 to the proposition that you even understand what the Law of Non-Contradiction means. How confident are you that logic works like you think it works; that you’re not just spouting gibberish even though it seems from the inside to make sense? If you’d just had a major concussion, with severe but temporary brain damage, would you notice? Are you sure? After such damage you might claim that “if bananas then clocks” was true with certainty 1, and feel from the inside like you were making sense. Don’t just dismiss minds you can’t empathize with (meaning minds which you can’t model by tweaking simple parameters of your self-model) as not having subjective experiences that look, to them, exactly like yours do to you. You already know you’re running on corrupted hardware; you can’t be perfectly confident that it’s not malfunctioning, and if you don’t know that then you can’t assign probability 1 to anything (on pain of being unable to update later).
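To spell out that parenthetical with a standard identity (my notation, not anything from the thread): once a hypothesis has probability 1, Bayes’ Theorem can never move it, because

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)} = \frac{P(E \mid H)}{P(E \mid H)} = 1 \quad \text{when } P(H) = 1,$$

for any evidence E with nonzero probability.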
Again, though, you’ve defined the subspace of minds which have to be persuaded in a way which determines precisely which statements are UCMAs. If you can draw useful inferences about that set of statements then go for it, but I don’t think you can. Particularly worth noting is that no “should” statement can be a UCMA, because I can have any preferences I want and still fit the definition, but “should” statements always engage with preferences.
I’m not even 90% sure of that, but I am entirely certain that the LNC is true: suppose I were to come across evidence to the effect that the LNC is false. But in the case where the LNC is false, the evidence against it is also evidence for it. In fact, if the LNC is false, the LNC is provable, since anything is provable from a contradiction. So if its true, it’s true, and if it’s false, it’s true. So it’s true. This isn’t entirely uncontroversial, there is Graham Priest after all.
You say you’re not positive that you know how logic works, and then you go on to make an argument using logic for how you’re certain about one specific logical proposition. If you’re just confused and wrong, full stop, about how logic works then you can’t be sure of any specific piece of logic; you may just have an incomplete or outright flawed understanding. It’s unlikely, but not certain.
Also, you seem unduly concerned with pointing out that your arguments are not new. It’s not counterproductive, but neither is it particularly productive. Don’t take this as a criticism or argument, more of an observation that you might find relevant (or not).
The Categorical Imperative, in particular, is nonsense in at least two ways. First, I don’t follow it, and have no incentive to do so. It basically says “always cooperate in the prisoner’s dilemma,” which is a terrible strategy (I want to cooperate iff my opponent will cooperate iff I cooperate); it’s hardly universally compelling, since it carries neither a carrot nor a stick which could entice me to follow it. Second, an arbitrary agent need not care what other minds do. I could, easily, prefer that a) I maximize paperclips but b) all other agents maximize magnets. These are not instrumental goals; my real and salient terminal preferences are over the algorithms implemented, not the outcomes (in this case). I should break the CI, since what I want to do and what I want others to do are different.
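As a sketch of why “always cooperate” is a terrible strategy, here’s a minimal Python example with standard (assumed) Prisoner’s Dilemma payoffs. In a one-shot simultaneous game you can’t literally condition on the opponent’s move, so read the conditional policy as conditioning on a reliable prediction of it.

```python
# A minimal sketch comparing unconditional cooperation with the
# "cooperate iff they cooperate" policy. Payoff values are the
# standard illustrative ones, an assumption rather than canon.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def score(strategy, their_move):
    """My payoff when my strategy responds to (a prediction of) their move."""
    return PAYOFFS[(strategy(their_move), their_move)]

always_cooperate = lambda them: "C"
cooperate_iff = lambda them: "C" if them == "C" else "D"

for them in ("C", "D"):
    print(f"vs {them}: always-C gets {score(always_cooperate, them)}, "
          f"iff-policy gets {score(cooperate_iff, them)}")
# vs C: both get 3.  vs D: always-C gets 0 while the iff-policy gets 1,
# so unconditional cooperation is weakly dominated here.
```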
Also, “should” statements are always descriptive, never prescriptive (as a consequence of what “should” means). You can’t propose a useful argument of the sort that says I should do x as a prescription. Rather, you have to say that my preferences imply that I would prefer to do x. “Should” is a description of preferences. What would it even mean to say that I should do x, but that doing so wouldn’t make me happier or fulfill any other of my preferences, and that I in fact will not do it? The word becomes entirely useless except as invective.
I don’t really want to go into extreme detail on the issues with Kantian erhics; I’m relatively familiar with it after a friend of mine wrote a high school thesis on Kant, but it’s full of elementary mistakes. If you still think it’s got legs to stand I recommend reading some more of the sequences. Note that human morality is written nowhere except in our brains. I’m tapping out, I think.
This is incorrect. “Should” and “prefer” can’t give different answers for yourself, unless you really muddle the entire issue of morality altogether. Hopefully we can all agree that there is no such thing as an objective morality written down on the grand Morality Rock (and even if there were there would be no reason to actually follow it or call it moral). If we can’t then let me know and I’ll defend that rather than the rest of this post.
The important question is: what the hell do we mean by “morality”? It’s not something we can find written down somewhere on one of Jupiter’s moons, so what exactly is it, where does it come from, and most importantly where do our intuitions and knowledge about it come from? The answer that seems most useful is that morality is the algorithm we would want to use to determine what actions to take, if we could self-modify to be the kind of people we want to be. It comes from reflecting on our preferences and values and deciding which we think are really and truly important and which we would rather do without. We can’t always do this perfectly right now, because we run on hostile hardware, but if we could reflect on all our choices perfectly then we would always choose the moral one. That aligns with our intuitions of morality as the thing we wish we could do, even if we sometimes can’t or don’t due to akrasia or just lack of virtue. Thus, there is clearly a difference between what we “should” do and what we “would” do (just as there is sometimes a difference between the best answer we can get for a math problem and the one we actually write down on the test). But there is no difference between what we “should” do and what we would prefer to do. Even if you think my definition of morality is missing something, it should be clear that morality cannot come from anywhere other than our preferences. There simply isn’t anywhere else we could get information about what we “should” do that anyone in their right mind wouldn’t just ignore.
In short, if I would do x, and I prefer to do x, then why the heck would/should I care whether I should do x?! Morality in that case is completely meaningless; it’s no more useful than whatever’s written on the great Morality Rock. If I don’t prefer to act morally (according to whatever system is given) then I don’t care whether my action is “moral”.
You have a very specific, universal definition of morality, which does seem to meet some of our intuitions about the word but which is generally not at all useful outside of that. Specifically, for some reason when you say moral you mean unselfish. You mean what we would want to do if we, personally, were not involved. That captures some of our intuitions, but only insofar as it is a specific thing that sounds sort of good and therefore tends to end up in a lot of moral systems. However, it is essentially a command from on high—thou shalt not place thine own interests above others’. I, quite frankly, don’t care what you think I should or shouldn’t do. I like living. I value my life higher than yours, by a lot. I think that in general people should flip the switch in the trolley problem, because I am more likely to be one of the five saved than the one killed. I think that if I already know I am the one, they should not. I understand why they wouldn’t care, and would flip it anyway, but I would do everything in my power (including use of the Dark Arts, bribes, threats, and lies) to convince them not to. And then I would walk away feeling sad that five people died, but nonetheless happy to be alive. I wouldn’t say that my action was immoral; on reflection I’d still want to live.
The major sticking point, honestly, is that the concept of morality needs to be dissolved. It is a wrong question. The terms can be preserved, but I’m becoming more and more convinced that they shouldn’t be. There is no such thing as a moral action. There is no such thing as good or evil. There are only things that I want, and things that you want, and things that other agents want. Clippy the paperclip maximizer is not evil, but I would kill him anyway (unless I could use him somehow with a plan to kill him later). I would adopt a binding contract to kill myself to save 5 others on the condition that everyone else does the same; but if I already know that I would be in a position to follow through on it then I would not adopt it. I don’t think that somehow I “should” adopt it even though I don’t want to, I just don’t want to adopt it and should is irrelevant (it’s exactly the same operation, mentally, as “want to”).
Basically, you’re trying to establish some standard of behavior and call it moral. And you’re wrong. That’s not what moral means in any sense other than that you have defined it to mean that. Which you can’t do. You’ve gotten yourself highly confused in the process. Restate your whole point, but don’t use the words moral or should anywhere (or synonyms). What you should find is that there’s no longer any point to be made. “Moral” and “should” are buzzwords with no meaning, but they sound like they should be important so everyone keeps talking about them and throwing out nice-sounding things and calling them moral, and are contradicted by othe people with other nice things and calling them moral. Sometimes I think the fundamentalist theists have it better figured out; “moral” is what God says it is, and you care because otherwise you’re thrown into fire!
By the important question, I meant the important question with regard to the problem at hand. Ultimately I’ve since decided that the whole concept of morality is a sort of Wrong Question; discourse is vastly improved by eliminating the word altogether (and not replacing with a synonym).
What is the process which determines what you should do? What mental process do you perform to decide that you should or shouldn’t do x? When I try to pinpoint it, I just keep finding myself using exactly the same thoughts as when I decide what I prefer to do. When I reflect back on my days as a Christian, I recall checking against a set of general rules of good and bad and determining where something lay on that spectrum. “Should” can mean something different from “want” in the sense of “according to the Christian Bible, you should use any means necessary to bring others to believe in Christ, even if that hurts you.” But when talking about yourself? What’s the rule set you’re comparing against? I want to default to comparing against your preferences. If you don’t do that, then you need to be a lot more specific about what you mean by “should”, and indeed why the word is useful at all in that context.
It seems like a rather different statement to say that there exists a mechanism in our brains which tends to make us want to act as though we had no stakes in the situation, as opposed to talking about what is moral. I’m no evo-psych specialist, but it seems plausible that such a mechanism exists. I dispute the notion that such a mechanism encompasses what is usually meant by morality. Most moral systems do not resolve to simply satisfying that mechanism. Also, I see no reason to label that particular mechanism “moral”, nor its output those things we “should” do (I don’t just disagree with this on reflection; my actual intuition is that “should” means what you want to do, while impartiality is a disconnected preference that I recognize but don’t associate even a little bit with “should”. I don’t seem to have an intuition about what morality means other than doing what you should, but then I get a little jarring sensation from the contact with my “should” intuition...). You’ve described something I agree with after the taboo, but which before it I definitely disagree with. It’s just an issue of semantics at this point, but semantics are also important. “Morality” has really huge connotations for us; it’s a bit disingenuous to pick one specific part of our preferences and call it “moral”, or what we “should” do (even if that’s the part of our brain that causes us to talk about morality, it’s not what we mean by morality). I mean, I ignore parts of my preferences all the time. A thousand shards of desire and all that. Acting impartially is somewhere in my preferences, but it’s pretty effectively drowned out by everything else (and I would self-modify away from it given the option—it’s not worth giving anything up for on reflection, except as social customs dictate).
I can identify the mechanism you call moral outrage though. I experience (in my introspection of my self-simulation, so, you know, reliable data here /sarcasm) frustration that he would make a decision that would kill me for no reason (although it only just now occurred to me that he could be intentionally evil rather than stupid—that’s odd). I oddly experience a much stronger reaction imagining him being an idiot than imagining him directly trying to kill me. Maybe it’s a map from how my “should” algorithm is wired (you should do that which on reflection you want to do) onto the situation, which does make sense. I dislike the goals of the evil guy, but he’s following them as he should. The stupid one is failing to follow them correctly (and harming me in the process—I don’t get anywhere near as upset, although I do get some feeling from it, if he kills 5 to save me).
In short, using the word “moral” makes your point sound really different than when you don’t. I agree with it, mostly, without “moral” or “should”. I don’t think that most people mean anything close to what you’ve been using those words to mean, so I recommend some added clarity when talking about it. As to the Squareness Rock, “square” is a useful concept regardless of how I learned it—and if it was a Harblan Rock that told me a Harblan was a rectangle with sides in a 2:9 ratio, I wouldn’t care (unless there were special properties about Harblans). A Morality Rock only tells me some rules of behavior, which I don’t care about at all unless they line up with the preferences I already had. There is no such thing as morality, except in the way it’s encoded in individual human brains (if you want to call that morality, since I prefer simply calling it preferences); and your definition doesn’t even come close to the entirety of what is encoded in human brains.
Is it okay to slip into the streams of thought that the other considers logic in order to beat them at it and potentially shake their beliefs?
Basically, the question here is whether you can use the Dark Arts with purely Light intentions. In the ideal case, I have to say “of course you can”. Assuming that you know a method which you believe is more likely to cause your partner to gain true beliefs rather than false ones, you can use that method even if it involves techniques that are frowned upon in rationalist circles. However, in the real world, doing so is incredibly dangerous. First, you have to consider the knock-on effects of being seen to use such lines of reasoning; it could damage your reputation or that of rationalists in general for those that hear you, it could cause people to become more firm in a false epistemology which makes them more likely to just adopt another false belief, etc. You also have to consider that you run on hostile hardware; you could damage your own rationality if you aren’t very careful about handling the cognitive dissonance. There are a lot of failure modes you open yourself up to when you engage in that sort of anti-reasoning, and while it’s certainly possible to navigate through it unscathed (I suspect Eliezer has done so in his AI box experiments), I don’t think it is a good idea to expose yourself to the risk without a good reason.
An unrelated but also relevant point: everything is permissible, but not all things are good. Asking “is it okay to...” is the wrong question, and is likely to expose you to some of the failure modes of Traditional Rationality. You don’t automatically fail by phrasing it like that, but once again it’s an issue of unnecessarily risking mental contamination. The better question is “is it a good idea to...” or “what are the dangers of...” or something similar that voices what you really want answered, which should probably not be “will LWers look down at me for doing …” (After all, if something is a good idea but we look down on it, then we want to be told so, so that we can stop doing silly things like that.)
It seems to me that this paper is overly long and filled with unnecessary references, even allowing for an audience of philosophers unfamiliar with the field. It suffices to say that “bottom-up predictability” applied to the mind implies that we can build a machine to do the things which the mind does. The difficulty of doing so has a strict upper bound in the difficulty of building an organic brain from scratch, and is very probably easier than that (if any special physical properties are involved, they can very likely be duplicated by something much easier to build). Basically, if you accept that the brain is a physical system, then every argument you can produce about how physical systems can’t do what the brain does is necessarily wrong (although you might need something that isn’t a digital computer). Anything past that is an empirical technological issue which is not really in the realm of philosophy at all, but rather of computer scientists and physicists.
The sections on Gödel’s theorem and hypercomputation could be summed up in a quick couple of paragraphs which reference each in turn as examples of objections that physical systems can’t do what minds do, followed by the reminder that if you accept the mind as a physical system then clearly those objections can’t apply. It feels like you keep saying the same things over and over in the paper; by the end I was wondering what the point was. Certainly I didn’t feel like your title tied into the paper very well, and there wasn’t a strong thesis that stood out to me. (“Given that the brain is a physical system, and physics is consistent (the same laws govern machines we find in nature, like the human brain, and those we build ourselves), it must be possible in principle to build machines which can do any and all jobs that a human can do.”) My proposed thesis is stronger than yours, mostly because machines have already taken many jobs (factory assembly lines), and machines are already able to perform mathematical proofs (look up computer-assisted proofs; they can solve problems that humans can’t due to speed requirements). I also use “machines” instead of “AI” in order to avoid questions of intelligence or the like; the Chinese Room might produce great works of literature, even if it is believed not to be intelligent, and that literature could be worth money.
Don’t take this as an attack or anything, but rather as criticism that you can use to strengthen your paper. There’s a good point here, I just think it needs be brought out and given the spotlight. The basic point is not complex, and the only thing you need in order to support it is an argument that the laws of physics don’t treat things we build differently just because we built them (there’s probably some literature here for an Appeal to Authority if you feel you need one; otherwise a simple argument from Occam’s Razor plus the failure to observe this being the case in anything so far is sufficient). You might want an argument for machines that are not physically identical to humans, but you’ll lose some of your non-reductionist audience (maybe hyper computation is possible for humans but nothing else). Such an argument can be achieved through Turing-complete simulation, or in the case of hyper computation the observation that it should be probably possible to build something that isn’t a brain but uses the same special physics.
Warning: I am not a philosophy student and haven’t the slightest clue what any of your terms mean. That said, I can still answer your questions.
1) Occam’s Razor to the rescue! If you distribute your priors according to complexity and update on evidence using Bayes’ Theorum, then you’re entirely done. There’s nothing else you can do. Sure, if you’re unlucky then you’ll get very wrong beliefs, but what are the odds of a demon messing with your observations? Pretty low, compared to the much simpler explanation that what you think you see correlates well to the world around you. One and zero are not probabilities; you are never certain of anything, even those things you’re probably getting used to calling a priori truths. Learn to abandon your intuitions about certainty; even if you could be certain of something, our default intuitions will lead us to make bad bets when certainty is involved, so there’s nothing there worth holding on to. In any case, the right answer is understanding that beliefs are always always always uncertain. I’m pretty sure that 2 + 2 = 4, but I could be convinced otherwise by an overwhelming mountain of evidence.
2) I don’t know what question is being asked here, but if it has no possible impact on the real world then you can’t decide if it’s true or false. Look at Bayes’ Theorem; if probability (evidence given statement) is equal to probability (evidence) then your final belief is the same as your prior. If there is in principle no excitement you could run which would give you evidence for or against it, then the question is not really a question; knowing it was true or false would tell you nothing about which possible world you live in; it would not let you update your map. It is not merely useless but fundamentally not in the same class of statements as things like “are apples yellow?” or “should machines have legal rights, given “should” referring to generalized human preferences?” If there is an experiment you could run in principle, and knowing whether the statement is true or false would tell you something, then you simply have to refer to Occam’s Razor to find your prior. You won’t necessarily get an answer that’s firmly one way or another, but you might.
3) I’ll admit I had to look this up to give an answer. What I found was that there is literally not a question here. Go read A Human’s Guide to Words (sequence on LW) to understand why, although I’ll give a brief explanation. “Knowledge”, the word, is not a fundamental thing. Nowhere is there inscribed on the Almighty Rock of Knowledge that “knowledge” means “justified true belief” or “correctly assigned >90% certainty” or “things the Flying Spaghetti Monster told you.” It only has meaning as a symbol that we humans can use to communicate. If I made it clear that I was going to use the phrase “know x” to mean “ate x for breakfast”, and then said “I know a chicken biscuit”, I would be commiting an error; but that error would have nothing to do with the true meaning of “know”. When I say “I know that the earth is not flat”, I mean that I have seen pretty strong evidence that the earth really isn’t flat, such that for it to be flat would require a severe mental break on my part or other similarly unlikely circumstances. I don’t know it with certainty; I don’t know anything with certainty. But that’s not what “know” means in the minds of most people I speak with, so I can say “I know the world is not flat” and everyone around me gets the right idea. There is no such thing as a correct attribution of knowledge, nor an incorrect one, because knowledge is not a fundamental thing nor sharply defined, but instead it’s a fuzzy shape in conceptspace which corresponds to some human intuitions about the world but not to the actual territory. Humans are biased towards concrete true/false dichotomies, but that’s not how the real world works. Once you realize that beliefs are probabilities you’ll realize how incredibly silly most philosophical discussions of knowledge are.
My quick advice to you in general (so that you can solve future problems like this on your own) is threefold. First, learn Bayes and keep it close to you at all times; the Twelve Virtues of Rationality are a nice way to remind yourself what it means to actually want the right answer. Second, read A Human’s Guide to Words, and in particular play Rationalist Taboo constantly. Play it with yourself before you speak, and with others when they use words like “knowledge” or “free will”. Do not simply accept a vague intuition; play until you’re certain of what you mean (and it matches what you meant when you first said it), or certain that you have no idea. Pro tip: free will sounds like a pretty simple concept, but you have no idea how to specify it other than as that thing you can feel you have (and any other specification fails to capture what you or anybody else really wants to talk about). Third, and I’m sure some people will disagree here, but... get the heck out of philosophy. There is almost nothing of value that you’ll get from the field. Almost all of it is trash, because there really aren’t enough interesting questions that don’t require you to actually go out and do /gasp/ science to justify an entire field. Pretty much all the important ones have answers already, although you wouldn’t know that by talking to philosophers. Philosophy was worthwhile in Ancient Greece, when “philosopher” meant “aspiring rationalist” and human knowledge was at the stage of gods controlling everything, but in the modern day we already have the basic rationalist’s toolkit available for mass consumption. Any serious advance in the Art will come from needing it to do something that the Art you were taught couldn’t do for you, and such advances are what philosophy should be, but isn’t, providing. You won’t find need of a new rationalist Art if you’re merely trying to convince other people, who by definition do not already have this new Art, of some position you arrived at because others argued it convincingly to you. If you care about the human state of knowledge, go into any scientific discipline. Otherwise just pick literally anything else. There’s nothing for you in philosophy except a whole lot of confused words.
Not even close. Imagine that, instead of charities, these were colored balls, and instead of altruistic benefit, you were getting paid (or money was sent to your charity of choice). Say I gave you $10, and you get a return of 200:1 on any money placed on the ball that comes out. How do you distribute your money? Any distribution other than putting it all on the most likely ball loses out in expectation.
Alternatively, imagine colored cards in a deck. You guess what color comes next, and you get $10 for every correct guess. What do you guess, assuming cards are replaced every time? In a hundred guesses, do you change your guess ten times? If you do, you’ll lose out.
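Here’s the arithmetic as a minimal Python sketch, using the $10 budget and 200:1 payout from above (the outcome probabilities are my own assumed numbers):

```python
# A minimal sketch: under a fixed payout, expected return is maximized
# by putting the entire budget on the single most probable outcome.
# Probabilities below are illustrative assumptions.

def expected_return(allocation, probabilities, payout=200):
    # E[return] = sum_i P(outcome i) * payout * money placed on i
    return sum(p * payout * m for p, m in zip(probabilities, allocation))

probs = [0.5, 0.3, 0.2]        # chance each ball (or card color) comes up
budget = 10

concentrated = [budget, 0, 0]  # all on the most likely outcome
diversified = [5, 3, 2]        # spread in proportion to probability

print(expected_return(concentrated, probs))  # 0.5 * 200 * 10 = 1000
print(expected_return(diversified, probs))   # 500 + 180 + 80  = 760
```

Spreading your bets trades expected value for variance, which is exactly the wrong trade when the payout per dollar is fixed.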
Hi! I’ve been lurking here for maybe 6 months, and I wanted to finally step out and say hello, and thank you! This site has helped to shape huge parts of my worldview for the better and improved my life in general to boot. I just want to make a list of a few of the things I’ve learned since coming here which I never would have otherwise, as nearly as I can tell.
I’ve dropped the frankly silly beliefs I held as an evangelical Christian; I wasn’t as bad as most in that category but in hindsight that was just due to luck and strong logical skills. (I knew better than to assert that everyone should [know that they should] believe, but nonetheless I chose to follow a harmful “morality”)
I’ve learned how to argue effectively and identify real disagreements as opposed to simple definitional disputes, or asking the wrong question. I’ve used this to resolve a long-standing (think years) dispute with my cousin about the application of the word “literally” as it relates to hyperbole
I’ve realized that intelligence isn’t just a fun party trick; I can use it directly to improve my life. Instrumental rationality was something that just never crossed my mind before coming here; intellect made me a good programmer but it sucked that I couldn’t get girls. Now I’ve been actively dieting and getting exercise just because I suddenly realized that I can actually improve my life, if I try.
In a similar vein, akrasia is a thing I can fight, and a thing I can fight smart. If just jumping in and doing doesn’t work, I have options.
Cryonics exists, like, right now. I can go out and buy immortality (or at least a decent chance of a really long life). That’s a huge deal to me.
There are other people who think the same way I do. I always had trouble finding any combination of intelligence and epistemic rationality, plus the desire to talk about relevant topics using those skills. I knew, realistically, that I couldn’t be that exceptional, but I had trouble finding evidence to disprove it (not that I looked that hard, mind you).
Polyamory is a real thing that real people do, not just a cool idea from a story. I haven’t made any use of the observation yet, but it meshes well with many of my intuitions about romance.
Did I really just almost forget the basic premise of this site? I’ve become, in general, less wrong (epistemic rationality). Quantum mechanics is awesome!
Probably a bunch of stuff that isn’t coming to mind at the moment.
Anyway, for all of that and more, thanks! This site has influenced me more than anything or anyone else ever has. It’s really difficult to describe what it feels like to be less wrong and know exactly how and why, but I guess you guys probably know anyway.
And a few questions. First, I noticed that there’s a meetup in Austin but not in the (much larger) Houston area. Is this a lack of members in the area (this is the Bible Belt, after all), or just because no one’s tried to start one? Second, and there may be a thread already devoted to this somewhere, but what are some good math or computer science books I should look for? I already know the basics of calculus and can throw together my own solutions for most harder problems, but I’d like to get a stronger understanding of higher-level math and of computer algorithms that use it. And third, are there any other websites/blogs (besides OB) with a similar tone and community to this one, though perhaps on different topics, that anyone would recommend?