The fact that you can be so wrong about what I have said, when it’s all there in writing, seems to me to be strong evidence that you just don’t know how to think carefully about arguments when they engage your emotions, which is what I was initially saying in my critical comments to your first post.
Whatever WrongBot’s failings as a poster may or may not be, I haven’t seen anything in his posts to suggest that the problem is arguments engaging his emotions. You’ve expressed the opinion that WrongBot can’t reason, and perhaps this comment is evidence for that (although I think I understand the point he is trying to make), but I don’t perceive the connection to emotions. It is, of course, possible that I’ve missed something demonstrating that his emotions are at the root of any failures he has exhibited.
At any rate, it seems to me that there are any number of posters on LW who’ve exhibited reasoning failures at one time or another, and I don’t understand why you’ve focused on WrongBot to the extent of asking him to stop posting on LW until he can reason better. If anything, as a more or less independent observer, I feel like it is your focus on WrongBot that could be interpreted as some sort of an emotional response, although from my knowledge of you, I don’t think that’s actually the case.
I don’t know how to surf and that doesn’t make me a bad person. It does mean, however, that I don’t really belong in a surfing club. It’s cool for me to watch surfing if that’s how I want to spend my time, and maybe to learn how to surf by doing so, but it’s not fair to the other surfers if I get out there in the ocean with them and start taking up space on the waves doing the sorts of stunts that the best surfers in the club are doing without making a serious attempt to overcome the Dunning-Kruger effect. If too many people start doing that, there will be a serious injury and the club will be shut down.
I disagree that posts from WrongBot, or others with similar failings in reasoning (although I think you and I also disagree about how serious WrongBot’s failings are), pose a serious risk to LessWrong as a community, although I also see how reasonable minds might differ on that. From my perspective, I don’t think we’ve reached the point where LW is so crowded with posts that good ideas and posts are being crowded out by bad ones. More fundamentally, I don’t see any real problem with posts like WrongBot’s, even if they exhibit imperfect reasoning. Even poorly reasoned posts can lead to interesting discussions, as I think WrongBot’s posts have.
Even poorly reasoned posts can lead to interesting discussions, as I think WrongBot’s posts have.
Indeed. I upvoted this post and the other on this topic because they contained interesting information that was new to me, and since I “like and want more of that”, they deserve upvoting on that basis.
I do think that both posts contain a bit too much whaling on the strawman of “the standard narrative” and could do without it altogether, but at the same time I don’t see why people are so focused on arguing with that. It’s almost like a sacred cow is being threatened, or that WrongBot has previously been identified as an enemy outsider due to having supported polyamory.
(IOW, I see some of the reaction to WrongBot as greater evidence of emotional involvement by people other than WrongBot.)
I mostly agree with PJ. I found the book discussion in WrongBot’s posts interesting. His claims about evolutionary psychology and its “standard narrative” were half-baked, but I attribute that to him not having done enough homework on these subjects. There is a lot of bad information about evolutionary psychology out there, which seems to have biased WrongBot and, combined with his values, made him vulnerable to claims that evolutionary psychology “does not acknowledge the mutability of human preference” (see this book review for more debunking).
I’m quite confident that WrongBot is a good enough rationalist that he will update when exposed to more evidence. The kinds of errors he is making are the typical errors that intelligent human rationalists make when they first approach subjects they don’t know much about, have been exposed to biased information on, and hold strong values about. I think he has been misled and is under-informed on the topic of evolutionary psychology, rather than being fundamentally biased by his emotions. I recommend that he read more on the subject, and not just popular books. While I will urge him to do more research before making his own speculations on these subjects in ways that go beyond summarizing, I don’t think his posts are in any way a threat to LessWrong, and I would be interested in continued posting from him in the future. The assessments of MichaelVassar and rhollerith’s friend seem overly harsh.
I do think that both posts contain a bit too much whaling on the strawman of “the standard narrative” and could do without it altogether, but at the same time I don’t see why people are so focused on arguing with that.
The reason is that there is a long history of people being wrong in criticizing evolutionary psychology, and rationalists should be able to do better. I’m interested in real scrutiny of the field, not recycled criticisms that have already been answered by evolutionary psychologists over a decade ago (see the link to that book review for an example), or the resurrection of debunked positions that most mainstream evolutionary psychologists don’t hold anymore.
For what it’s worth, all that talk about (and emphasis on) the standard narrative comes from Sex at Dawn. I don’t think it’s representative of all (or even most) current thought in evolutionary psychology, though there are some discussions on LW that have been framed in its terms.
In any case, point taken. I’ll shut up about it in my remaining posts in the sequence.
I see some of the reaction to WrongBot as greater evidence of emotional involvement by people other than WrongBot.
For example, people’s emotional involvement with Malthus’s assertion that human populations increase at an exponential rate absent limits on resources? :)
ADDED. I retract this comment since (I now realize) PJ wrote some of the reaction, and obviously I cannot refute what PJ wrote by listing instances in which the reaction was justified on rational grounds.

Er, you did see the word some in there, right?

Upvoted, and grandparent retracted.
I disagree that posts from WrongBot, or others with similar failings in reasoning . . . pose a serious risk to LessWrong as a community.
How much experience have you had watching the trajectory of online communities?
Have you for example informed yourself of the case of Reddit (the original one) which is particularly relevant to this community in that the software is so similar?
I have not, but Paul Graham has (he was an investor in Reddit). He has stated many times that he believes his own community, Hacker News, is in constant danger of falling prey to the dynamic that rendered Reddit worthless to thoughtful, busy people, and he has taken many different measures in response: banning users relatively frequently, denying new users the right to cast downvotes (or any votes at all if their karma is low enough), and hiding the “reply” link on certain posts based on an algorithm.
How much experience have you had watching the trajectory of online communities?
Is anyone aware of any good write-ups on this topic? I’d be interested in seeing any insights as to why things happen the way they do, and what we can do to improve matters.
To answer my own question, here are a few write-ups I found about why the quality of an online community tends to decline over time, and what can be done about this problem:

A Group Is Its Own Worst Enemy by Clay Shirky
What I’ve Learned from Hacker News by Paul Graham
Trolls by Paul Graham
Well-Kept Gardens Die By Pacifism by Eliezer Yudkowsky
Also, the book “The Virtual Community” that Richard Hollerith mentioned is available online, although I wasn’t able to find much information in it about the specific topic at hand.
Howard Rheingold’s book with the string “Virtual Community” in the title. Old, though: 1994 or so, but very informative about pre-Internet communities. I found the chapter on France’s Minitel particularly valuable.
I’ve been participating in online communities since 1992, and most of my information has come from short comments by people trying to preserve the character of specific communities. Paul Graham’s comments on Hacker News are particularly worthwhile, but have not been collected in any one place.
How much experience have you had watching the trajectory of online communities?
Have you for example informed yourself of the case of Reddit (the original one) which is particularly relevant to this community in that the software is so similar?
I have not studied this with any rigor, although I have seen communities that I previously enjoyed enter periods of decline (sometimes recovering at a later point, sometimes not). I don’t disagree that with online communities, there is often some tipping point when the bad reasoning/noise outweighs the good. That’s why I also made this part of my comment:
From my perspective, I don’t think we’ve reached the point where LW is so crowded with posts that good ideas and posts are being crowded out by bad ones.
Perhaps I’m wrong about this. At any rate, if LW is actually in a serious period of decline, the problem is more serious than just WrongBot, and I disagree with a solution in which individual posters take it upon themselves to ask other posters to leave. (If EY wants to create some sort of system like Paul Graham’s, or appoint new moderators with these sorts of powers, that would be different, in my view, from this sort of ad hoc approach, which seems unlikely to work, due to both its ad hoc nature and its unenforceability, and which also presents greater risks of abuse, decisions based on personality conflicts, and so on.)
if LW is actually in a serious period of decline, the problem is more serious than just WrongBot
Agreed. In particular, LW has successfully weathered long flurries of comments and posts by people worse than WrongBot.
The primary sign that LW is in danger of becoming the kind of place that I and those I admire no longer want to visit is how negative the scores are on comments asking WrongBot to stop writing on things beyond his skill, and how positive the scores are on WrongBot’s replies to those comments. That is new.
Note that the vast majority of readers of LW never attempt to create evolutionary arguments relevant to human behavior or summarize novel arguments made by others. I would hope that that is because they realize that it is too difficult for them.
Nobody can downvote on Hacker News. The only vaguely analogous function is “flag” which leads to posts (not comments) being killed or marked for killing.
(Edit: rhollerithdotcom points out, correctly, that this is only true for submissions and that above a karma threshold comments are downvotable.)

Useful essay on online communities: http://www.kuro5hin.org/story/2009/3/12/33338/3000
Nobody can downvote on Hacker News. The only vaguely analogous function is “flag”
If one has enough karma (ISTR the threshold being 200 points at one point, though it has probably been raised a few times since then), one can downvote comments; how else to explain the presence of comments with negative scores in almost every comment section?

You might be right about top-level submissions, though.
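The karma-gated voting rules described above can be sketched in a few lines. This is a minimal, illustrative sketch, not Hacker News’s actual code; the specific threshold values and names are assumptions based on the half-remembered figures in this thread:

```python
# Illustrative sketch of karma-gated voting as described in the thread:
# very-low-karma users cannot vote at all, and downvoting comments
# requires crossing a higher karma threshold. Both thresholds are
# assumptions, not the site's real values.

DOWNVOTE_THRESHOLD = 200  # karma needed before downvoting comments (ISTR figure)
VOTE_THRESHOLD = 1        # minimum karma needed to cast any vote

def can_vote(karma: int, direction: str) -> bool:
    """Return True if a user with the given karma may cast this vote."""
    if karma < VOTE_THRESHOLD:
        return False  # lowest-karma users are denied all votes
    if direction == "down":
        return karma >= DOWNVOTE_THRESHOLD  # downvotes are gated separately
    return True  # upvotes allowed once past the basic floor

print(can_vote(5, "up"))      # True
print(can_vote(5, "down"))    # False
print(can_vote(250, "down"))  # True
```

The design point is that the gate is asymmetric: cheap positive feedback is open to nearly everyone, while the more destructive action is reserved for users who have already demonstrated some standing.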
At any rate, it seems to me that there are any number of posters on LW who’ve exhibited reasoning failures at one time or another, and I don’t understand why you’ve focused on WrongBot to the extent of asking him to stop posting on LW until he can reason better.
I’ve also been finding this to be incredibly confusing. I feel like I must have done something to terribly offend him, but I have no idea what that might have been. And many of his comments aren’t consistent with that hypothesis, so that’s probably not it.
I just went and looked through his comment history for replies he’s made to me that might explain this appearance of personal antagonism, and now I’m more confused. This was the first time he replied to a comment I’d made. Key line:
Thank you! I’m so happy to have a community where things like this happen.
So maybe I’m being singled out for not living up to a promising first impression? I really have no idea. Michael Vassar is probably the only one who could answer the question with any confidence.