I imagine that something of a similar sentiment animates much of the popular hostility to LessWrong-style rationalism.
I’m not convinced. I know a few folks who know about LW and actively dislike it; when I try to find out what it is they dislike about it, I’ve heard things like —
LW people are personally cold, or idealize being unemotional and criticize others for having emotional or aesthetic responses;
LW teaches people to rationalize their existing prejudices more effectively — similar to Eliezer’s remarks in Knowing About Biases Can Hurt People;
LW-folk are overly defensive of LW-ideas, hold unreasonably high standards of evidence for disagreement with them, and dismiss any disagreement that can’t meet those standards as a sign of irrationality;
LW has an undercurrent of manipulation, or seems to be trying to trick people into supporting something sinister (although this person could not say what that hidden goal was, which implies that it’s something less overt than “build Friendly AI and take over — er, optimize — the world”);
LW is a support network for Eliezer’s approaches to superhuman AI / the Singularity, and Eliezer is personally not trustworthy as a leader of that project;
LW-folk excessively revere intelligence over other positive human traits, and subscribe to the notion that more intelligent people should dominate others; or that people who don’t fit a narrow definition of high intelligence are unworthy, possibly even unworthy of life;
LW-folk seem to believe that if you buy into Bayesian epistemology, you must buy into the rest of the LW memeplex, or that all the other ideas of LW are “Bayesian”;
LW tolerates weird censorship that appears to be motivated by magical thinking;
LW-folk just aren’t very nice or pleasant to be around;
LW openly contemplates proposals that good people consider obviously wrong and morally repulsive, which makes it undesirable to be around. (This person described observing LW-folk discussing under what circumstances genocide might be moral. I don’t know where that discussion took place, whether it was on this site, another site, or in person.)
Yeesh. These people shouldn’t let feelings or appearances influence their opinions of EY’s trustworthiness—or “morally repulsive” ideas like justifications for genocide. That’s why I feel it’s perfectly rational to dismiss their criticisms—that and the fact that there’s no evidence backing up their claims. How can there be? After all, as I explain here, Bayesian epistemology is central to LW-style rationality and related ideas like Friendly AI and effective altruism. Frankly, with the kind of muddle-headed thinking those haters display, they don’t really deserve the insights that LW provides.
There, that’s 8 out of 10 bullet points. I couldn’t get the “manipulation” one in because “something sinister” is underspecified; as to the “censorship” one, well, I didn’t want to mention the… thing… (ooh, meta! Gonna give myself partial credit for that one.)
Ab, V qba’g npghnyyl ubyq gur ivrjf V rkcerffrq nobir; vg’f whfg n wbxr.
That was pretty subtle, actually. You had my blood boiling at the end of the first paragraph and I was about to downvote. Luckily I decided to read the rest.
I wonder how these people who dislike LW feel about geeks/nerds in general.
Most of them are geeks/nerds themselves, or at least have seen themselves as such at some point in their lives.
That makes me more curious; I have the feeling there’s quite a bit of anti-geek/nerd sentiment among geeks/nerds, not just non-nerds.
(Not sure how to write the above sentence in a way that doesn’t sound like an implicit demand for more information! I recognize you might be unable or unwilling to elaborate on this.)