Voted up, but calling them “nerds” in reply is equally ad-hominem, ya know. Let’s just say that they don’t seem to have the very high skill level required to distinguish good unusual beliefs from bad unusual beliefs, yet. (Nor even the realization that this is a hard problem, yet.)
Yes, they’re pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it’s worthwhile trying to avoid offending them when they make probability-theoretic errors. Even if they mock you first.
Also, one person on RationalWiki saying silly things is not a good reason to launch an aggressive counterattack on a whole wiki containing many potential recruits.
I guess I should try harder to remember this, in the context of my rather discouraging recent foray into the Richard Dawkins Forums—which, I admit, had me thinking twice about whether affiliation with “rational” causes was at all a useful indicator of actual receptivity to argument, and wondering whether there was much more point in visiting a place like that than a generic Internet forum. (My actual interlocutors were in fact probably hopeless, but maybe I could have done a favor to a few lurkers by not giving up so quickly.)
But, you know, it really is frustrating how little of the quality of a person (like Richard Dawkins, or, say, Paul Graham) or a cause (like increasing rationality, or improving science education) actually manages to rub off or trickle down onto the legions of Internet followers of said person or cause.
This is actually one of Niven’s Laws: “There is no cause so right that one cannot find a fool following it.”
You understand this is more or less exactly the problem that Less Wrong was designed to solve.
Is there any information on how the design was driven by the problem?
For example, I see a karma system, a hierarchical discussion that lets me fold and unfold articles, and lots of articles by Eliezer. I’ve seen similar technical features elsewhere, such as Digg and Slashdot, so I’m confused about whether the claim is that this specific technology solves the problem of having a ton of clueless followers, or that the large number of articles from Eliezer does, or something else.
Not to detract, but does Richard Dawkins really possess such ‘high quality’? IMO his arguments are good as a gateway for aspiring rationalists, not that far above the sanity waterline.
That, or it might be a problem of forums in general…
Dawkins is a very high-quality thinker, as his scientific writings reveal. The fact that he has also published “elementary” rationalist material in no way takes away from this.
He’s way, way above the level represented by the participants in his namesake forum.
(I’d give even odds that EY could persuade him to sign up for cryonics in an hour or less.)
Bloggingheads are exactly 60 minutes.
To be fair, I’d expect it to be a lot harder with an audience.
Exactly what I was thinking.
I was thinking: “Bloggingheads implies the participants believe they are within a few degrees of status of each other”. It’d definitely be one worth a viewing or two!
Here’s Dawkins on some non socially-reinforced views: AI, psychometrics, and quantum mechanics (in the last 2 minutes, saying MWI is slightly less weird than Copenhagen, but that the proliferation of branches is uneconomical).
Obviously the most you could persuade him of would be that he should look into it.
You’re absolutely right; I didn’t consider his scientific writings. Though my argument still weakly stands, since I wasn’t talking about those: he’s a good scientist, but a rationalist of, say, Eliezer’s level? I somehow doubt that.
(My bias is that he hasn’t gone beyond the ‘debunking the gods’ phase in his not-specifically-scientific writings, and here I’ll admit I haven’t read much of him.)
Read his scientific books, and listen to his lectures and conversations. Pay attention to the style of argumentation he uses, as contrasted with other writers on similar topics (e.g. Gould). What you will find is that beautiful combination of clarity, honesty, and—importantly—abstraction that is the hallmark of an advanced rationalist.
The “good scientist, but not good rationalist” type utterly fails to match him. Dawkins is not someone who compartmentalizes, or makes excuses for avoiding arguments. He also seems to have a very good intuitive understanding of probability theory—even to the point of “getting” the issue of many-worlds.
I would indeed put him near Eliezer in terms of rationality skill-level.
Most of Dawkins’ output predates the extreme rationality movement. Few scientists actually study rational thought—it seems as though the machine intelligence geeks and some of their psychologist friends have gone some way beyond what is needed for everyday science.
Again, it’s not just the fact that he does science; it’s the way he does science.
Having skill as a rationalist is distinct from specializing in rationality as one’s area of research. Dawkins’ writings aren’t on rational thought (for the most part); they’re examples of rational thought.
I was actually considering writing a post about the term “Middle World”—an excellent tool for capturing a large space of consistent weaknesses in human intuitions.
I was expecting him to write like the posts here, i.e. about rationality etc., but you make a good point. Consequently, I was browsing the archives a while ago and found this. Now, it is three years old, but from the comments (Barkley_Rosser’s, mainly) it appears Gould didn’t exactly “[undo] the last thirty years of progress in his depiction of the field he was criticizing”.
Not that I want to revive that old thread.
Convincing Dawkins would be a great strategy for promoting cryonics… who else should the community focus on convincing?
Excuse me, what? The community, as in LW? We’re a cryonics advocacy group now?
I used cryonics as an example because komponisto used it before me. I intended my question to be more general: “If you’re trying to market LW, or ideas commonly discussed here, then which celebrities and opinion-leaders should you focus on?”
Friends and family. They are the ones I care about most. (And, most likely, those that others in the community care about most too. At least the friends part. Family is less certain but more significant.)
Sure, convince those you love. I was asking who you should try to convince if your goal is convincing someone who will themselves convince a lot of other people.
it really is frustrating how little of the quality of a person [...] actually manages to rub off

Wait, you have a model which says it should?
You don’t learn from a person merely by associating with them. And:

onto the legions of Internet followers of said person or cause.
I would bet a fair bit that this is the source of your frustration, right there: scale. You can learn from a person by directly interacting with them, and sometimes by interacting with people who learned from them. Beyond that, it seems to me that you get “dilution effects”, kicking in as soon as you grow faster than some critical pace at which newcomers have enough time to acculturate and turn into teachers.
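To make that “critical pace” concrete, here is a minimal toy sketch (my own illustration, with made-up parameters, not anything the commenter specified): members count as acculturated only after a fixed lag, and we watch what fraction of the community has been around long enough to teach as the per-step growth rate varies.

```python
# Toy dilution model: a community grows by a fixed rate each step, and a
# member can "teach" only after `acculturation_time` steps of membership.

def acculturated_fraction(growth_rate, steps=100, acculturation_time=5):
    """Fraction of members old enough to have acculturated, at steady growth."""
    cohorts = [1.0]  # population of each joining cohort, oldest first
    for _ in range(steps):
        cohorts.append(sum(cohorts) * growth_rate)  # newcomers this step
    mature = sum(cohorts[:-acculturation_time])  # cohorts older than the lag
    return mature / sum(cohorts)

for g in (0.05, 0.15, 0.5):
    print(f"growth {g:.0%}/step -> acculturated fraction {acculturated_fraction(g):.0%}")
```

Under this model the mature fraction settles near (1+g)^-T, where T is the acculturation lag; past a certain growth rate the would-be teachers are permanently outnumbered by newcomers, which is exactly the dilution story.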
Communities of inquiry tend to be victims of their own success. The smarter communities recognize this, anticipate the consequences, and adjust their design around them.
Bad ones certainly seem to. Perhaps the high-quality person at least leaves less room for the negative influences?
Interesting. How many places have you brought this issue up? Is there any forum which has responded rationally? What seem to be the controlling biases?
LW is thus far the only forum on which I have personally initiated discussion of this topic; but obviously I’ve followed discussions about it in numerous other places.
Is there any forum which has responded rationally?

You’re on it.
I mean, there are plenty of instances elsewhere of people getting the correct answer. But basically what you get is either selection bias (the forum itself takes a position, and people are there because they already agree) or the type of noisy mess we see at RDF. To date, LW is the only place I know of where an a priori neutral community has considered the question and then decisively inclined in the right direction.
What seem to be the controlling biases?

In the case of RDF, I suspect compartmentalization is at work: this topic isn’t mentally filed under “rationality”, and there’s no obvious cached answer or team to cheer for. So people there revert to the same ordinary, not-especially-careful default modes of thinking used by the rest of humanity, which is why the discussion there looks just like the discussions everywhere else.
It’s noteworthy that my references and analogies to concepts and arguments discussed by Dawkins himself had no effect; apparently, we were just in a sort of separate magisterium. Particularly telling was this quote:

You are claiming that the issue of gods existence has been the subject of a major international trial, where a jury found that god existed? When did that happen?
Now on the face of it this seems utterly dishonest: I hardly think this fellow would actually be tempted to convert to theism upon hearing the news that eight Perugians had been convinced of God’s existence. But I suspect he’s actually just trying to express the separation that apparently exists in his mind between the kind of reasoning that applies to questions about God and the kind of reasoning that applies to questions about a criminal case.
Technical nitpick on the use of ‘a priori’ in this context. (Subject to possible contradiction if I have missed a nuance in the meaning in the statistics context.)
I would have just gone with ‘previously’.
Yes, they’re pretty softcore by LessWrongian standards but places like this are where advanced rationalists are recruited from, so if someone is making a sincere effort in the direction of Traditional Rationality, it’s worthwhile trying to avoid offending them when they make probability-theoretic errors.

(As an extreme example, a few weeks idly checking out RationalWiki led me to the quote at the top of this page and only a few months after that I was at SIAI.)
I only just noticed this. Good Lord. (I put that quote there, so you’re my fault.)
Point taken.
The realization that, as a human, you have something called an irrationality problem is both important and rare.