arundelo
Robert Morris has a very unusual quality: he’s never wrong. It might seem this would require you to be omniscient, but actually it’s surprisingly easy. Don’t say anything unless you’re fairly sure of it. If you’re not omniscient, you just don’t end up saying much.
[....] He’s not just generally correct, but also correct about how correct he is.
-- Paul Graham
Such people have no problem with the idea of magic, because everything is magic to them, even science.
Years ago, three other people and I were training for a tech support job. Our trainer was explaining something (the tracert command), but I didn’t understand it because his explanation didn’t seem to make sense. After asking him more questions about it, I realized from his contradictory answers that he didn’t understand it either. The reason I mention this is that my three fellow trainees had no problem with his explanation; one even explicitly said that she thought it made perfect sense.
“Help, I’m trapped in an autonomic nervous system!”
Here’s one I say a lot: “Everyone is too young to die.”
Problem is, “Fucking up when presented with surprising new situations” is actually a chronic human behavior. It’s why purse snatchers are so effective—by the time someone registers Wait, did somebody just yank my purse off my shoulder?, the snatcher is long gone.
Eventually you just have to admit that if it looks like the absence of a duck, walks like the absence of a duck, and quacks like the absence of a duck, the duck is probably absent.
That’s why I’m skeptical of people who look at some catastrophic failure of a complex system and say, “Wow, the odds of this happening are astronomical. Five different safety systems had to fail simultaneously!” What they don’t realize is that one or two of those systems are failing all the time, and it’s up to the other three systems to prevent the failure from turning into a disaster.
-- Raymond Chen
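The arithmetic behind Chen’s point can be sketched with made-up numbers (the 0.1 failure probability below is purely illustrative): if five independent safety systems each fail with some small probability, “all five fail simultaneously” looks astronomically unlikely, but if two of them are already silently broken, the disaster only needs the remaining three to fail.

```python
# Illustrative probabilities only; not data about any real system.
p = 0.1  # assumed per-period failure probability of each safety system

naive = p ** 5   # "all five fail at once," computed from scratch
latent = p ** 3  # two systems were already silently broken; three remain

print(f"naive: {naive:.0e}, with latent failures: {latent:.0e}")
# The latent-failure scenario is about 100x more likely than the naive one.
```

The ratio between the two numbers is why auditing for silently failed safety systems matters more than the naive multiplication suggests.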
To properly understand how traceroute works, one would need to know about the TTL field.
I did learn about this on my own that day, but the original confusion was at a quite different level: I asked whether the times on each line measured the distance between that router and the previous one, or between that router and the source. His answer: “Both.” A charitable interpretation of this would be “They measure round trip times between the source and that router, but it’s just a matter of arithmetic to use those to estimate round trip times between any two routers in the list”—but I asked him if this was what he meant and he said no. We went back and forth for a while until he told me to just research it myself.
Edit: I think I remember him saying something like “You’re expecting it to be logical, but things aren’t always logical”.
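The distinction I was asking about can be sketched with a toy simulation (the hop latencies below are made-up illustrative numbers, not real traceroute output). Traceroute sends probes with increasing TTL values; the probe with TTL n expires at hop n, which sends back an ICMP Time Exceeded message. So the time on line n is the round trip between the source and router n, and differences between consecutive lines estimate router-to-router times:

```python
# Hypothetical one-way latencies (ms) between consecutive hops on a route.
hop_latency = [5.0, 12.0, 3.0, 20.0]

def traceroute_rtts(hop_latency):
    """Simulate what traceroute reports: for each TTL value n, the probe
    expires at hop n, and the reported time is the round trip between
    the SOURCE and hop n -- not between hop n and hop n-1."""
    rtts = []
    cumulative = 0.0
    for latency in hop_latency:
        cumulative += latency
        rtts.append(2 * cumulative)  # out to that hop and back
    return rtts

for n, rtt in enumerate(traceroute_rtts(hop_latency), start=1):
    print(f"hop {n}: {rtt} ms")
# Subtracting consecutive lines recovers (an estimate of) the
# per-hop round trips -- the "just a matter of arithmetic" part.
```

This ignores real-world complications (queuing variance, asymmetric routes, per-hop ICMP rate limiting), but it captures the answer to the original question: each line measures source-to-router, not router-to-previous-router.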
Specifically, [these recent books that deal with parallel universes] argue that if some scientific theory X has enough experimental support for us to take it seriously, then we must take seriously also all its predictions Y, even if these predictions are themselves untestable (involving parallel universes, for example).
As a warm-up example, let’s consider Einstein’s theory of General Relativity. It’s widely considered a scientific theory worthy of taking seriously, because it has made countless correct predictions—from the gravitational bending of light to the time dilation measured by our GPS phones. This means that we must also take seriously its prediction for what happens inside black holes, even though this is something we can never observe and report on in Scientific American. If someone doesn’t like these black hole predictions, they can’t simply opt out of them and dismiss them as unscientific: instead, they need to come up with a different mathematical theory that matches every single successful prediction that general relativity has made—yet doesn’t give the disagreeable black hole predictions.
-- Max Tegmark, Scientific American guest blog, 2014-02-04
Things like linear algebra, group theory, and probability have so many uses throughout science that learning them is like installing a firmware upgrade to your brain—and even the math you don’t use will stretch you in helpful ways.
NPR show All Things Considered on the Singularity and SIAI
[I]n any system that is less than 100% perfect, some effort ends up being spent on checking things that, retrospectively, turned out to be ok.
People’s belief in something is evidence for that thing, in the sense that, in general, people are more likely to believe a thing if it is true. Less Wrongers sometimes use the phrase “Bayesian evidence” when they want to explicitly include this type of evidence, which is excluded by other standards of evidence.
One way to think about this: Imagine that there are a bunch of parallel universes, some of which have a flat Earth and some of which have a spherical Earth, and you don’t know which type of universe you’re in. If you look around and see that a bunch of people believe the Earth is flat, you should judge it as more likely you’re in a flat-Earth universe than if you looked around and saw few or no flat-Earthers.
However, people’s beliefs are often weak evidence that can be outweighed by other evidence. The fact that many people believe in a god is evidence that there is a god, but (I think) it’s outweighed by other evidence that there is not a god.
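A toy Bayesian update makes the “weak evidence” point concrete (all three probabilities below are made-up for illustration): observing widespread belief shifts the probability toward the claim being true, but only by as much as the likelihood ratio allows.

```python
# Made-up illustrative numbers; the point is the shape of the update.
prior = 0.5                # P(X) before looking at what people believe
p_belief_given_true = 0.6  # P(many believe X | X is true)
p_belief_given_false = 0.3 # P(many believe X | X is false)

# Bayes' theorem: P(X | belief observed)
posterior = (p_belief_given_true * prior) / (
    p_belief_given_true * prior + p_belief_given_false * (1 - prior)
)
print(f"posterior: {posterior:.3f}")  # about 0.667
```

Belief moved the probability from 0.5 to about two thirds; other, stronger evidence (likelihood ratios further from 1) can easily swamp an update of that size.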
See also “Argument Screens off Authority”.
You argue that it would be wrong to stab my neighbor and take all their stuff. I reply that you have an ugly face. I commit the “ad hominem” fallacy because I’m attacking you, not your argument. So one thing you could do is yell “OI, AD HOMINEM, NOT COOL.”
[...] What you need to do is go one step more and say “the ugliness of my face has no bearing on moral judgments about whether it is okay to stab your neighbor.”
But notice you could’ve just said that without yelling “ad hominem” first! In fact, that’s how all fallacies work. If someone has actually committed a fallacy, you can just point out their mistake directly without being a pedant and finding a pat little name for all of their logical reasoning problems.
This thread needs a mention of this saying: “Correlation correlates with causation because causation causes correlation.” (I don’t know if anyone knows who came up with this.)
Steven and most of the people here (including me) do indeed believe that “you are your brain” in the sense that the mind is something that the brain does. But Steven’s epigram is using “you” in a narrower sense, referring to just the conscious, internal-monologue part of the mind.
In the fable of the fox and the grapes, it’s the fox’s brain that is the proximate cause of him giving up the attempt to get the grapes, but it’s the “creepy vizier” part of his mind that makes up the “I didn’t want them anyway” story.
(Edit: I should have said “most of the other people here” in my first sentence. In case you didn’t know it, Steven Kaas is an LWer. He is kind enough to let me and others earn tons of karma by quoting his Twitter bons mots.)
On some pitch black mornings, hearing what I knew was a cold wind howling outside, I might think, “Well, it is certainly comfortable in this bed, and maybe it wouldn’t hurt if I just skipped practicing to-day.” But my response to this was not to draw on something called will power, to insult or threaten myself, but to take a longer look at my life, to extend my vision, to think about the whole of my experience, to reconnect present and future, and quite specifically, to ask myself, “Do you like playing the cello or not? Would you like to play it better or not?” When I put the matter this way I could see that I enjoyed playing the cello more than I enjoyed staying in bed. So I got up. If, as sometimes happened or happens, I do stay in bed, not sleeping, not really thinking, but just not getting up, it is not because will power is weak but because I have temporarily become disconnected, so to speak, from the wholeness of my life. I am living in that Now that some people pursue so frantically, that gets harder to find the harder we look for it.
John Holt, Freedom and Beyond, p. 119
See also this comment by Z_M_Davis.
Hacker School has a set of “social rules [...] designed to curtail specific behavior we’ve found to be destructive to a supportive, productive, and fun learning environment.” One of them is “no feigning surprise”:
The first rule means you shouldn’t act surprised when people say they don’t know something. This applies to both technical things (“What?! I can’t believe you don’t know what the stack is!”) and non-technical things (“You don’t know who RMS is?!”). Feigning surprise has absolutely no social or educational benefit: When people feign surprise, it’s usually to make them feel better about themselves and others feel worse. And even when that’s not the intention, it’s almost always the effect. As you’ve probably already guessed, this rule is tightly coupled to our belief in the importance of people feeling comfortable saying “I don’t know” and “I don’t understand.”
I think this is a good rule and when I find out someone doesn’t know something that I think they “should” already know, I instead try to react as in xkcd 1053 (or by chalking it up to a momentary maladaptive brain activity change on their part, or by admitting that it’s probably not that important that they know this thing). But I think “feigning surprise” is a bad name, because when I’m in this situation, I’m never pretending to be surprised in order to demonstrate how smart I am, I am always genuinely surprised. (Surprise means my model of the world is about to get better. Yay!)
Numerical arithmetic should look to children like a simpler and faster way of doing things that they know how to do already, not a set of mysterious recipes for getting right answers to meaningless questions.
John Holt, How Children Fail, p. 101
See also Paul Lockhart.
-- Steven Kaas