Well, this website is called “LessWrong”, so conforming to subjective values doesn’t seem all that important, and I didn’t expect to be punished for not doing so. I’ve read the rules; they say “Don’t think about it too hard” and “If we don’t like it, we will give you feedback”, but it seems like the rate-limiting was the feedback. They mention rate-limiting if you get too much negative karma, but this isn’t exactly true: my karma is net-positive, the negatives are just from older accounts, whose votes seem to carry more weight.
> My personal bar is heavily based on the ratio of effort:value
While I agree, I want to ask you: how many grade-school essays are worth one PhD thesis? If you ask me, even a million isn’t enough. If you “go up a level”, you basically render the previous level worthless. Raising the level means one will make more mistakes, but when one is correct, the value is completely unlike playing it safe and restating what’s already proven (which is an easy way to get karma, but I don’t see the value in that kind of karma).
On a personal level, I kind of enjoy the kindness you show in looking down on me, and comments like yours are even elegant and pleasing to read, but I don’t think this is the most valuable aspect of comments, since the topics presented here (like AGI) concern the future of humanity. The pace here is a little boring, and I do believe I’m being misunderstood (what I assume goes without saying, and therefore prune from my comments, is probably exactly the part that people would nod along with and upvote me for, simply because they agree and enjoy reading what they already know).
And not to be rude, but couldn’t it be that some of the perceived noise is a false positive? I can absolutely defend everything I’ve posted so far. I also don’t believe it’s a virtue to keep quiet about the truth just because it’s unpleasant.
> they may still fall below the necessary return-on-effort
I think the extinction of human nature (and the dynamics involved) is quite important. The same goes for the sustainability of the immaterial aspects of life and the possible degeneration of psychological development (resulting in populations which are unable to stand up for themselves). In this very comment section, another user writes “I sort of accidentally killed some parts of the animal that I am” as a consequence of reading the sequences. This is one of the things I’ve raised concerns about in my comments, and which regular users have voted “false”.
My “worst” comment this month is at [5, -5] and about TikTok. I think people dislike it because of their personal bias and because they’re naive (naive as in the kind of people who believe “won’t somebody think of the children!” is a genuine concern rather than a propaganda tactic). But admittedly, only the first paragraph was sufficiently clear and correct. Perhaps you will have to be less kind to me? I cannot guess my faults, so somebody would have to put aside the pity and be more direct with me.
It feels like you want this conversation to be about your personal interactions with LessWrong. That makes sense; it would be my focus if I’d been rate-limited. But having that conversation in public seems like a bad idea, and I’m not competent to have it in public or in private[1].
So let me ask: how do you think conversations about norms and moderation should go, given that mod decisions will inevitably cause pain to people affected by them, and “everyone walks away happy” is not an achievable goal?
In part because, AFAIK, I haven’t read your work. I checked your user page for the first 30 comments and didn’t see any votes in either direction. I will say that if you know your comments are “too long and ranty, and they’re also hard to understand”, those all seem like good things to work on.
I can only comment every 48 hours, so I can’t split my replies into multiple comments, each communicating only what concerns the person I’m responding to. Engaging is optional, no pressure from me (perhaps from yourself or the community?). I’m n=1, but still part of the sample of rate-limited users, so my case generalizes to the extent that I overlap with other people who are rate-limited (now or in the future).
I think people should take responsibility for their words: if the real rules are unwritten, then those who broke them just did as they were told. The rules pretend to be based on objective metrics like “quality” rather than subjective virtues like following the consensus (which will feel objective from the inside for, say, 95% of people). There’s no pain on my end, but in general it’s easier to accept punishment when a reason is given, or when there’s a piece of criticism which the offending person has to admit might be valid. Staff are only human too, but some of the reasoning seems lazy. New users are not “entitled to a reason” for being punished? But the entire point of punishment is teaching, and there’s literally no learning without feedback. Is giving feedback not advantageous to both parties?
By the way, the rate-limiting algorithm as I’ve understood it seems poor. It only takes one downvoted comment to get limited, so it doesn’t matter whether a user leaves one good comment and one poor comment, or writes 99 good comments and one poor comment. Older accounts seem exempt, but if even older accounts write comments worthy of rate-limiting, then the rules are too harsh, and if they don’t, then the exemption changes nothing. (I’m aware the punishment is not 100% automated, though.) Edit: I’m clearly confused about the algorithm. Is it: if ∃ (poor comment) ∈ (most recent 20 comments), then rate-limited from the time of judgement until t+20 days? This seems wrong too.
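To make my reading concrete, here it is as a sketch; every name and threshold below is my guess, not the site’s actual code:

```python
# A sketch of the rate-limiting rule as I currently understand it.
# All names and thresholds are my guesses, not LessWrong's actual code.

def is_rate_limited(comment_scores: list[int]) -> bool:
    """comment_scores: karma of the user's comments, oldest first."""
    window = comment_scores[-20:]  # only the most recent 20 items seem to count
    # My (possibly wrong) reading: a single downvoted comment anywhere in
    # the window triggers the limit, however good the rest of the window is.
    return any(score < 0 for score in window)
```

If it really works like this, then 19 good comments and one poor one trigger the limit just as surely as one and one, which is what seems wrong to me.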
My comments can be shorter or easier to understand, but not both. Most people will communicate big ideas by linking to them; linking 20 pages is much more acceptable than writing them in a comment. But these are my own ideas; there are no links. The rest of the issues might be differences in taste rather than quality. Going against the consensus is *probably* enough to get one rate-limited, even if they’re correct, so if the website becomes an echo chamber, it can only be solved by somebody with a good reputation voicing their concerns from the inside (where it’s most difficult to notice).
I’m one of the weirder users, though, so I’m sure to be misunderstood. It worries me more that the other users were rate-limited; I can’t imagine a justification for doing so. If justifying it is easy, I think an explanation is proper: any user could drop by and mention it. If only the mod team can tell why these users were rate-limited, then it follows that the users made no obvious mistakes, from which it also follows that there’s very little that the targets (and even observers) can learn from all this.
Finally—I actually respect gatekeeping and high standards, but such rules should be visible. “When in Rome”—yeah, but what if the sign says “welcome to Italy”? And I’m not convinced (though I’d like to be) that I was punished for falling short of high standards rather than for petty reasons like conformity, political values, or preferences owing to a lack of self-actualization.
First of all, thank you, this was exactly the type of answer I was hoping for. Also, if you still have the ability to comment freely on your short form, I’m happy to hop over there.
You’ve requested people stop sugarcoating so I’m going to be harsher than normal. I think the major disagreement lies here:
> But the entire point of punishment is teaching
I do not believe the mod team’s goal is to punish individuals. It is to gatekeep in service of keeping LessWrong’s quality high. Anyone who happens to emerge from that process making good contributions is a bonus, but not the goal.
How well is this signposted? The new user message says
Followed by a cripplingly long New User Guide.
I think that message was put in last summer, but I’m not sure when. You might have joined before it went up (although then you would have been on the site when the equivalent post went up).
> Going against the consensus is *probably* enough to get one rate-limited, even if they’re correct
For issues interesting enough to have this problem, there is no ground truth that humans can access. There is human judgement, and a long process that will hopefully lead to better understanding eventually. Mods and readers are not contacting an oracle, hearing that a post is true, and downvoting it anyway because they dislike it. They’re reading content and deciding whether it is well-formed (for regular karma) and whether they agree with it (for agreement votes, and probably also regular karma, although IIRC the correlation between those was lower than I expected; LessWrong voters love to upvote high-quality things they disagree with).
If you have a system that is more truth-tracking, I would love to hear it, and I’m sure the team would too. But any system will have to take into account the fact that there is no magical source of truth for many important questions, so power will ultimately rest on human judgement.
On a practical level:
> My comments can be shorter or easier to understand, but not both. Most people will communicate big ideas by linking to them; linking 20 pages is much more acceptable than writing them in a comment. But these are my own ideas; there are no links.
Easier to understand. LessWrong is more tolerant of length than most of the internet.
When I need to spend many pages on something boring and detailed, I often write a separate post for it, which I link to in the real post. I realize you’re rate-limited, but rate limits don’t apply to comments on your own posts (shortform is in a weird middle ground, but nothing stops you from creating your own post to write on). Or create your own blog elsewhere and link to it.
Thanks for your reply!
I do have access; I just felt like waiting and replying here. By the way, if I comment 20 times on my shortform, will the rate-limit stop? This feels like an obvious exploit in the rate-limiting algorithm, but it’s still possible that I don’t know how it works.
> It is to gatekeep in service of keeping LessWrong’s quality high
Then outright banning would work better than rate-limiting without feedback like this. If people contribute in good faith, they need to know just what other people approve of; vague feedback doesn’t help alignment very much. And while an Eternal September is dangerous, you likely don’t want a community dominated by veteran users who are hostile to new users. I’ve seen this in videogame communities, and it leads to forms of stagnation.
It confuses me that you got 10 upvotes for the contents of your reply (I can’t find fault with the writing, formatting, or tone), but it’s easily explained by assuming that users here don’t act much differently than they do on Reddit, which would be sad.
I already read the new users guide. Perhaps I didn’t put it clearly enough with “I think people should take responsibility for their words”, but it was the new users guide which told me to post. I read the “Is LessWrong for you?” section, and it told me that LessWrong was likely for me. I read the “well-kept garden” post in the past and found myself agreeing with its message. This is why I felt misled, and why I don’t think linking these two sections makes for a good counter-argument (after all, I attempted to communicate that I had already taken them into account). I thought LW should take responsibility for what it told me, as trusting it is what got me rate-limited. That’s the core message; the rest of my reply just defends my approach to commenting.
> For issues interesting enough to have this problem, there is no ground truth that humans can access
In order not to be misunderstood completely, I’d need a disclaimer like this at the top of every comment I make, which is clearly not feasible:
Humanity is somewhat rational now, but our shared knowledge is still filled with old errors which were made before we learned how to think. Many core assumptions are just wrong. But if these beliefs were corrected, the correction would cascade, collapsing some of the beliefs that people hold dear, or touching upon controversial subjects. The truth doesn’t stand a chance against politics, morality, and social norms. Sadly, if you want to prevent society from collapsing, you will need to grapple a bit with these three subjects. But that will very likely lead to downvotes.
A lot of things are poorly explained, but nonetheless true. Other things are very well argued, but nonetheless false. “Manifesting the future by visualizing it” is pseudoscience, but it has positive utility. “We must make new laws to keep everyone safe” sounds reasonable, but after 1000 iterations it should have dawned on us that the 1001st law isn’t going to save us. I think the reasonable-sounding sentence would net you positive karma on here, while the pseudoscience would get called worthless.
My logical intelligence is much higher than my verbal—and most people who are successful in social and academic areas of life are the complete opposite. Nonetheless, some of us can see patterns that other people just can’t. Human beings also have a lot in common with AI: we’re black boxes. Our instincts are discriminatory and biased, but only because the people who weren’t went extinct. Those who attempt to get rid of biases should first know what they are good for (Chesterton’s fence). But I can’t see a single movement in society advocating for change which actually understands what it’s doing. And people don’t like hearing this.
As of right now, the black box (intuition, instinct, etc.) is still smarter than the explainable truth. This will change as people are taught to disregard the black box and even break it. But this also goes against the consensus, in a way that I assume will be considered “bad quality” (some people might upvote what they disagree with, but I don’t think that extends to many types of disagreement).
And I’m also only human. Rate-limited users are perhaps the bottom 5% of posters? But I’m above that; I’m just grappling with subjects which are beyond my level. You told me to read the rules; that’s a lot easier. I could also get lots of upvotes if I engaged with subjects that I’m overqualified for. But as with AGI, some subjects are beyond our abilities, yet I don’t think we can afford to ignore them, so we’re forced to make fools of ourselves trying.
> By the way, the rate-limiting algorithm as I’ve understood it seems poor. It only takes one downvoted comment to get limited, so it doesn’t matter whether a user leaves one good comment and one poor comment, or writes 99 good comments and one poor comment.
Automatic rate-limiting only uses the last 20 posts and comments, which can still be relatively harsh, but 99 good comments will definitely outweigh one poor comment.
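To make the window behavior concrete, here is an illustrative sketch; the exact thresholds and weighting aren’t something I know, so the numbers below are invented:

```python
# Illustrative only: a sliding-window net-karma rule. The real thresholds
# and weighting are internal to the site; these numbers are invented.

def is_rate_limited(scores: list[int], window: int = 20, threshold: int = 0) -> bool:
    """scores: karma of the user's posts and comments, oldest first."""
    recent = scores[-window:]
    # Net karma over the window decides, not any single comment.
    return sum(recent) < threshold

# 99 good comments followed by one poor one: the poor comment is outweighed
# by the 19 good comments that share the window with it.
print(is_rate_limited([3] * 99 + [-10]))  # False
```

Under a rule shaped like this, one poor comment only matters if the rest of the window doesn’t compensate for it.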