Edited to add: in the original post, I intended but forgot to emphasize that I think the correlation between IQ and rationality is weak at best. Do people disagree? Does anyone want to go out on a limb and say, “They aren’t the same thing, but the correlation is still very strong?”
I’ll go ahead and disagree with this. Sure, there are plenty of smart people who aren’t rational, but then I would say that rationality is less common than intelligence. On the other hand, all the rational people I’ve met are very smart. So it seems that high intelligence is a necessary but not a sufficient condition. Or as Draco Malfoy would put it: “Not all Slytherins are Dark Wizards, but all Dark Wizards are from Slytherin.”
I largely agree with the rest of your post, Chris (upvoted), though I’m not convinced that the self-congratulatory part is Less Wrong’s biggest problem. Really, it seems to me that a lot of people on Less Wrong just don’t get rationality. They go through all the motions and use all of the jargon, but don’t actually pay attention to the evidence. I frequently find myself wanting to yell “stop coming up with clever arguments and pay attention to reality!” at the screen. A large part of me worries that rationality really can’t be taught; that if you can’t figure out the stuff on Less Wrong by yourself, there’s no point in reading about it. Or maybe there’s a selection effect, and people who post more comments tend to be less rational than those who lurk?
A large part of me worries that rationality really can’t be taught; that if you can’t figure out the stuff on Less Wrong by yourself, there’s no point in reading about it.

The teaching calls to what is within the pupil. To borrow a thought from Georg Christoph Lichtenberg, if an ass looks into LessWrong, it will not see a sage looking back.
I have a number of books of mathematics on my shelves. In principle, I could work out what is in them, but in practice, to do so I would have to be of the calibre of a multiple Fields Medallist and Nobel laureate, and exercise that ability for multiple lifetimes. Yet I can profitably read them, understand them, and use that knowledge; but that does still require at least a certain level of ability and previous learning.
Or to put that another way: learning is like verifying a solution, which is cheap (the P side of the analogy); figuring it out by yourself is like searching for the solution from scratch, which can be exponentially harder (the NP side).
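To make the analogy concrete, here is a toy sketch (my own illustration, not from the thread) using subset-sum, a classic NP-complete problem: checking a claimed answer takes polynomial time, while finding one by brute force may mean searching 2^n subsets.

```python
# Toy illustration of the verify-vs-search analogy (hypothetical example):
# checking a claimed answer is cheap; finding one from scratch is not.
from itertools import combinations

def verify_subset_sum(target, candidate):
    """'Learning': verify a proposed solution in polynomial time."""
    return candidate is not None and sum(candidate) == target

def find_subset_sum(numbers, target):
    """'Figuring it out yourself': brute-force search, exponential in the worst case."""
    for r in range(len(numbers) + 1):
        for subset in combinations(numbers, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
answer = find_subset_sum(nums, 9)    # the hard direction
print(verify_subset_sum(9, answer))  # the easy direction: True
```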
Agreed. I’m currently under the impression that most people cannot become rationalists even with training, but training those who do have the potential increases the chance that they will succeed. Still, I think rationality cannot be taught the way you might teach a university degree: a large part of it is inspiration, curiosity, hard work, and wanting to become stronger. And it has to click. Just sitting in the classroom and listening to the lecturer is not enough.

Actually, now that I think about it, just sitting in the classroom and listening to the lecturer wasn’t nearly enough to gain a proper understanding for my economics degree either, yet that’s all that most people did (aside from a cursory reading of the books, of course). So maybe the problem is not limited to rationality, but is more about becoming really proficient at anything in general.
Reading something and understanding/implementing it are not quite the same thing. It takes clock time and real effort to change your behavior.
I do not think it is unexpected that a large portion of the population on a site dedicated to writing, teaching, and discussing the skills of rationality is going to be, you know, still very early in their learning; that some people will have failed to grasp a lesson they think they have grasped; that others will think someone else has failed to grasp a lesson that they themselves have failed to grasp; and that you will have people who just like to watch stuff burn.
I’m sure it’s been asked elsewhere, and I liked the estimation questions on the 2013 survey; has there been a more concerted effort to see what being an experienced LWer translates to, in terms of performance on various tasks that, in theory, people using this site are trying to get better at?
Yes, you hit the nail on the head. Rationality takes hard work and lots of practice, and too often people on Less Wrong just spend time making clever arguments instead of doing the actual work of asking what the actual answer is to the actual question. It makes me wonder whether Less Wrongers care more about being seen as clever than they care about being rational.
As far as I know there’s been no attempt to make a rationality/Bayesian-reasoning test, which is a great pity, because I think something like that could help with the above problem.
There are many calibration tests you can take (there are many articles on this site with links to see if you are over- or underconfident on various subject tests—search for “calibration”).

What I don’t know is whether there has been some effort to do this across many questions and compile the results anonymously for LWers.
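For what it’s worth, here is a minimal sketch of what compiling such results might look like (hypothetical: no such LW tool exists, as far as I know), scoring each respondent’s stated probabilities against actual outcomes with a Brier score.

```python
# Hypothetical sketch of compiling calibration results across questions:
# score each respondent's stated probabilities against the actual outcomes.
def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs, outcome 0 or 1."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# One respondent's answers to ten binary calibration questions (made up).
answers = [(0.9, 1), (0.7, 1), (0.8, 0), (0.6, 1), (0.5, 0),
           (0.9, 1), (0.3, 0), (0.7, 0), (0.8, 1), (0.6, 1)]
print(f"Brier score: {brier_score(answers):.3f}")  # 0 is perfect; 0.25 ~ always saying 50%
```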
I caution against jumping quickly to conclusions about “signalling”. Frankly, I suspect you are wrong, and that most of the people here are in fact trying. Some might not be, and are merely looking for sparring matches. Those people are still learning things (albeit perhaps with less efficiency).
As far as “seeming clever” goes, perhaps as a community it makes sense to advocate that people take reasoning tests which do not strongly correlate with IQ and which people generally do quite poorly on (I’m sure someone has a list, though it may be a relatively short one). That might help people to see stupidity as part of the human condition, and not merely a feature of “non-high-IQ” humans.
Fair enough, that was a bit too cynical/negative. I agree that people here are trying to be rational, but you have to remember that signalling does not need to be deliberate. I definitely detect a strong impulse amongst the Less Wrong crowd to veer towards controversial and absurd topics rather than practical ones, and to make use of meta-level thinking and complex abstract arguments instead of simple, solid reasoning. It may not feel that way from the inside, but from the outside point of view it does rather look like Less Wrong is optimizing for being clever and controversial rather than rational.
I definitely say yes to (Bayesian) reasoning tests. Someone who is not me needs to go do this right now.
I don’t know that there is anything to do, or that should be done, about that outside-view problem. Understanding why people think you’re being elitist or crazy doesn’t necessarily help you avoid the label.
http://lesswrong.com/lw/kg/expecting_short_inferential_distances/
Huh? If the outside view tells you that there’s something wrong, then the problem is not with the outside view but with the thing itself. It has nothing to do with labels or inferential distance. The outside view is a rationalist technique for viewing a matter you’re personally involved in objectively, by taking a step back. I’m saying that when you take a step back and look at things objectively, it looks like Less Wrong spends more time and effort on being clever than on being rational.
But now that you’ve brought it up, I’d also like to add that the habit on Less Wrong of assuming that any criticism or disagreement must be due to inferential distance (really just a euphemism for saying the other guy is clueless) is an extremely bad one.
The outside view isn’t magic. Finding the right reference class to step back into, in particular, can be tricky, and the experiments the technique is drawn from deal almost exclusively with time forecasting; it’s hard to say how well it generalizes outside that domain.
Don’t take this as quoting scripture, but this has been discussed before, in some detail.
Okay, you’re doing precisely the thing I hate and which I am criticizing about Less Wrong. Allow me to illustrate:
LW1: Guys, it seems to me that Less Wrong is not very rational. What do you think?
LW2: What makes you think Less Wrong isn’t rational?
LW1: Well if you take a step back and use the outside view, Less Wrong seems to be optimizing for being clever rather than optimizing for being rational. That’s a pretty decent indicator.
LW3: Well, the outside view has theoretical limitations, you know. Eliezer wrote a post about how it is possible to misuse the outside point of view as a conversation stopper.
LW1: Uh, well unless I actually made a mistake in applying the outside view, I don’t see why that’s relevant? And if I did make a mistake in applying it, it would be more helpful to say what it was I specifically did wrong in my inference.
LW4: You are misusing the term inference! Here, someone wrote a post about this at some point.
LW5: Yeah, but that post has theoretical limitations.
LW1: I don’t care about any of that, I want to know whether or not Less Wrong is succeeding at being rational. Stop making needlessly theoretical abstract arguments and talk about the actual thing we were actually talking about.
LW6: I agree, people here use LW jargon as a form of applause light!
LW1: Uh...
LW7: You know, accusing others of using applause lights is a fully general counterargument!
LW6: Oh yeah? Well, fully general counterarguments are fully general counterarguments themselves, so there!
We’re only at LW3 right now so maybe this conversation can still be saved from becoming typical Less Wrong-style meta screwery. Or to make my point more politely: Please tell me whether or not you think Less Wrong is rational and whether or not something should be done, because that’s the thing we’re actually talking about.
Dude, my post was precisely about how you’re making a mistake in applying the outside view. Was I being too vague, too referential? Okay, here’s the long version, stripped of jargon because I’m cool like that.
The point of the planning fallacy experiments is that we’re bad at estimating the time we’re going to spend on stuff, mainly because we tend to ignore time sinks that aren’t explicitly part of our model. My boss asks me how long I’m going to spend on a task: I can either look at all the subtasks involved and add up the time they’ll take (the inside view), or I can look at similar tasks I’ve done in the past and report how long they took me (the outside view). The latter is going to be larger, and it’s usually going to be more accurate.
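As a toy illustration of those two estimates (all numbers invented, just to make the contrast concrete):

```python
# Toy contrast of the two estimates described above (numbers invented):
subtask_estimates = [2.0, 3.0, 1.5]          # hours: the "inside view" breakdown
past_similar_tasks = [9.0, 11.0, 8.5, 12.0]  # hours each similar past task took

inside_view = sum(subtask_estimates)                              # 6.5 hours
outside_view = sum(past_similar_tasks) / len(past_similar_tasks)  # 10.125 hours

# The outside view ignores the task's internal structure and just asks
# "how long did things in this reference class take?"; for time forecasts
# it is usually the larger, and the more accurate, of the two.
print(f"inside view: {inside_view:.1f}h, outside view: {outside_view:.1f}h")
```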
That’s a pretty powerful practical rationality technique, but its domain is limited. We have no idea how far it generalizes, because no one (as far as I know) has rigorously tried to generalize it to things that don’t have to do with time estimation. Using the outside view in its LW-jargon sense, to describe any old thing, is therefore almost completely meaningless; it’s equivalent to saying “this looks to me like a $SCENARIO1”. As long as there also exists a $SCENARIO2, invoking the outside view gives us no way to distinguish between them. Underfitting is a problem. Overfitting is also a problem. Which one’s going to be more of a problem in a particular reference class? There are ways of figuring that out, like Yvain’s centrality heuristic, but crying “outside view” is not one of them.
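A small sketch of the reference-class problem (invented numbers): the same project gets very different “outside view” estimates depending on which class you step back into, and the technique itself doesn’t tell you which one to use.

```python
# Sketch of the reference-class problem: the outside-view answer depends
# entirely on which reference class you choose, and choosing it is the
# hard part the technique doesn't solve.
def mean(xs):
    return sum(xs) / len(xs)

past_web_apps  = [30.0, 45.0, 60.0]  # days: everything ever tagged "web app"
past_solo_mvps = [10.0, 14.0, 12.0]  # days: a narrower class, fewer samples

print(mean(past_web_apps))   # 45.0 -- broad class, risks underfitting
print(mean(past_solo_mvps))  # 12.0 -- narrow class, risks overfitting
```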
As to whether LW is rational, I got bored of that kind of hand-wringing years ago. If all you’re really looking for is an up/down vote on that, I suggest a poll, which I will probably ignore because it’s a boring question.
Ok, I guess I could have inferred your meaning from your original post, so sorry if my reply was too snarky. But seriously, if that’s your point I would have just made it like this:
“Dude you’re only supposed to use the phrase ‘outside view’ with regards to the planning fallacy, because we don’t know if the technique generalizes well.”
And then I’d go back and change “take a step back and look at it from the outside view” into “take a step back and look at it from an objective point of view” to prevent confusion, and upvote you for taking the time to correct my usage of the phrase.
My guess is that the site is “probably helping people who are trying to improve”, because I would expect some of the materials here to help. I have certainly found a number of materials useful.
But a personal judgement of “probably helping” isn’t the kind of thing you’d want. It’d be much better to find some way to measure the size of the effect. Not tracking your progress is a bad, bad sign.
LW8: Rationality is more than one thing!
My apologies, I thought you were referring to how people who do not use this site perceive people using the site, which seemed more likely to be what you were trying to communicate than the alternative.
Yes, the site viewed as a machine does not look like a well-designed rational-people-factory to me, either, unless I’ve missed the part where it’s comparing its output to its input to see how it is performing. People do, however, note cognitive biases and what efforts to work against them have produced, from time to time, and there are other signs that seem consistent with a well-intentioned rational-people-factory.
And, no, not every criticism does. I can only speak for myself, and acknowledge that I have a number of times in the past failed to understand what someone was saying and assumed they were being dumb or somewhat crazy as a result. I sincerely doubt that’s a unique experience.
http://lesswrong.com/lw/ec2/preventing_discussion_from_being_watered_down_by/ and other articles, which I am now reading, because they are pertinent and I want to know what sorts of work have been done to figure out how LW is perceived and why.
On the other hand, all the rational people I’ve met are very smart.

Surely you know people of average intelligence who consistently show “common sense” (so rare it’s pretty much a superpower). They may not be super-smart, but they’re sure as heck not dumb.
Common sense does seem like a superpower sometimes, but that’s not a real explanation. I think that what we call common sense is mostly just the result of clear thinking and a distaste for nonsense. If you favour reality over fancies, you pay more attention to reality, which builds better mental habits, which in turn yield stronger intuition; that intuition is what we call common sense.
But to answer your question: yes, I do know people like that and I do respect them for it (though most of them still have above-average intelligence). However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they’re also really good at knowing what experts to listen to.
However, I would not trust them with making decisions on anything counter-intuitive like economics, unless they’re also really good at knowing what experts to listen to.

Yeah, but I’d say that about the smart people too.
Related, just seen today: The curse of smart people. SPOILER: “an ability to convincingly rationalize nearly anything.”
The AI box experiment seems to support this. People who have been persuaded that it would be irrational to let an unfriendly AI out of the box are nonetheless persuaded to let it out of the box.
The ability of smarter or more knowledgeable people to convince less intelligent or less educated people of falsehoods (e.g. parents and children) shows that we need to put less weight on arguments and more weight on falsifiability.
I wouldn’t use the AI box experiment as an example for anything, because it is specifically designed to be a black box: it’s exciting precisely because the outcome confuses the heck out of people. I’m having trouble parsing this in Bayesian terms, but I think you’re committing a rationalist sin by using an event that your model of reality couldn’t predict in advance as evidence that your model of reality is correct.
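To put the Bayesian point in concrete terms, here is a minimal sketch (invented numbers): if your model assigned low probability to the observed outcome, seeing that outcome should lower your credence in the model, not raise it.

```python
# A minimal Bayes sketch of that sin (numbers invented): evidence your model
# assigned low probability to should *lower* your credence in the model.
def posterior(prior_h, p_e_given_h, p_e_given_not_h):
    """P(H | E) via Bayes' theorem for a binary hypothesis."""
    p_e = prior_h * p_e_given_h + (1 - prior_h) * p_e_given_not_h
    return prior_h * p_e_given_h / p_e

prior = 0.7            # credence in your model before the experiment
p_out_if_right = 0.05  # your model said the gatekeeper won't let the AI out
p_out_if_wrong = 0.5   # a wrong model leaves the outcome wide open

print(posterior(prior, p_out_if_right, p_out_if_wrong))  # ~0.19: credence drops
```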
I strongly agree that we need to put less weight on arguments but I think falsifiability is impractical in everyday situations.
S1) Most smart people aren’t rational but most rational people are smart
D1) There are people of average intelligence with common sense
S2) Yes they have good intuition but you cannot trust them with counter-intuitive subjects (people with average intelligence are not rational)
D2) You can’t trust smart people with counter-intuitive subjects either (smart people aren’t rational)
D2 does not contradict S1, because “most smart people aren’t rational” isn’t the same as “most rational people aren’t smart”, which is of course the main point of S1.
Interesting article; it confirms my personal experience in corporations. However, I think the real problem is deeper than smart people being able to rationalize anything. The real problem is that overconfidence and rationalizing your actions make becoming a powerful decision-maker easier. The mistakes such people make due to irrationality don’t catch up with them until after the damage is done, and then the next overconfident guy gets selected.