I’ve previously talked about how I think Less Wrong’s culture seems to be on a gradual trajectory towards posting less stuff and posting it in less visible places. For example, six years ago a post like this qualified as a featured post in Main. Nowadays it’s the sort of thing that would go in an Open Thread. Vaniver’s recent discussion post is the kind of thing that would have been a featured Main post in 2010.
Less Wrong is one of the few forums on the internet that actually discourages posting content. This is a feature of the culture that manifests in several ways:
One of the first posts on the site explained why it’s important to downvote people. The post repeatedly references experiences with Usenet to provide support for this. But I think the internet has evolved a lot since Usenet. Subtle site mechanics have the potential to affect the culture of your community a lot. (I don’t think it’s a coincidence that Tumblr and 4chan have significantly different site mechanics and also significantly different cultures and even significantly different politics. Tumblr’s “replies go to the writer’s followers” mechanic leads to a concern with social desirability that 4chan’s anonymity totally lacks.)
On reddit, if your submission is downvoted, it’s downvoted into obscurity. On Less Wrong, downvoted posts remain on the Discussion page, creating a sort of public humiliation for people who are downvoted.
The Main/Discussion/Open Thread distinction invites snippy comments about whether your thing would have been more appropriate for some other tier. On most social sites, readers decide how much visibility a post should get (by upvoting, sharing, etc.). Less Wrong is one of the few that leaves it down to the writer. This has advantages and disadvantages. One advantage is that important but boring scholarly work can get visibility more easily.
Upvotes substitute for praise: instead of writing “great post” type comments, readers will upvote you, which is less of a motivator.
My experience of sitting down to write a Less Wrong post is as follows:
I have some interesting idea for a Less Wrong post. I sit down and excitedly start writing it out.
A few paragraphs in, I think of some criticism of my post that users are likely to make. I try to persevere for a while anyway.
Within an hour, I have thought of so many potential criticisms or reasons that my post might come across as lame that I am totally demoralized. I save my post as a draft, close the tab, and never return to it.
Contrast the LW model with the “conversational blogging” model where you sit down, scribble some thoughts out, hit post, and see what your readers think. Without worrying excessively about what readers think, you’re free to write in open mode and have creative ideas you wouldn’t have when you’re feeling self-critical.
Anyway, now that I’ve described the problem, here are some offbeat solution ideas:
LW users move away from posting on LW and post on Medium.com instead. There aren’t upvotes or downvotes, so there’s little fear of being judged. Bad posts are “punished” by being ignored, not downvoted. And Medium.com gives you a built-in audience so you don’t need to build up a following the way you would with an independent blog. (I haven’t actually used Medium.com that much; maybe it has problems.)
The EA community pays broke postdocs to create peer-reviewed, easily understandable blog posts on topics of interest to the EA community at large (e.g. an overview of the literature on how to improve the quality of group discussions, motivation hacking, rationality stuff, whatever). This goes on its own site. After establishing a trusted brand, we could branch out into critiquing science journalism in order to raise the sanity waterline or other cool stuff like that.
Someone makes it their business to read everything that gets written on every blog in the EA-sphere and create a “Journal of Effective Altruism” that’s a continually updated list of links to the very best writing in the EA-sphere. This gives boring scholarly stuff a chance to get high visibility. This “Editor-in-Chief” figure could also provide commentary, link to related posts that they remember, etc. I’ll bet it wouldn’t be more than a part-time job. Ideally it would be a high status, widely trusted person in the EA community who has a good memory for related ideas.
Some of these are solutions that make more sense if the EA movement grows significantly beyond its current scope, but it can’t hurt to start kicking them around.
The top tier of actually-read posting is dominated by one individual (a great one, but still)
Are we talking about LW proper here? Arguably this has been true over a good chunk of the site’s history: at one time it was Eliezer, then Yvain, then Lukeprog, etc.
Within an hour, I have thought of so many potential criticisms or reasons that my post might come across as lame that I am totally demoralized. I save my post as a draft, close the tab, and never return to it.
It doesn’t help that even the most offhand posting is generally treated as if it were an academic paper, and reviewed and skewered accordingly :-p.
Agreed. This is, for me, one of the main advantages of posting on tumblr. You still get the feedback you want from clever people and criticism, but that criticism doesn’t feel quite as bad as it would here, because everyone realizes that tumblr is a good space to test and try out ideas. Less Wrong feels, to me, more like a place where you share more solidified ideas (with the Open Thread as a possible exception).
Vaniver’s recent discussion post is the kind of thing that would have been a featured Main post in 2010.
I will point out that I didn’t put that in Main (which is where I target the majority of the post-style content I create) because I think the first paragraph is the only ‘interesting’ part of that post, and it’s a fairly straightforward idea, and the primary example was already written about by Eliezer, twice.
Within an hour, I have thought of so many potential criticisms or reasons that my post might come across as lame that I am totally demoralized. I save my post as a draft, close the tab, and never return to it.
This is a more serious issue, which was actually pretty crippling with the aforementioned discussion post—but that was mostly because it was a post telling people “you can’t tell people things they don’t know.” (Yes, there’s the consolation that you can explain things to people, but did I really want to put in the effort to explain that?)
Proposals for making LW upvote-only emerge every few months, most recently during the retributive downvoting fiasco. I said then, and I continue to believe now, that it’s a terrible idea.
JMIV is right to say in the ancestor that subtle features of moderation mechanics have outsized effects on community culture; I even agree with him that Eliezer voiced an unrealistically rosy view of the downvote in “Well-Kept Gardens”. But upvote-only systems have their own pitfalls, and quite severe ones. The reasons behind them are somewhat complex, but boil down to bad incentives.
Imagine posting as a game scored in utility. Upvotes gain you utility; downvotes lose you it; and for most people being downvoted costs you more than being upvoted gains you, though the exact ratio varies from person to person. You want to maximize your utility, and you have a finite amount of time to spend on it. If you spend that time researching new content to post, your output is low but it’s very rarely downvoted. Debate takes a moderate amount of time; votes on debate are less reliable, especially if you’re arguing for something like neoreaction or radical feminism or your own crackpot views on time and dimension, but you’re all but guaranteed upvotes from people that agree with you. Plus telling people they’re wrong is fun, so you get some bonus utility. Finally, you can post cat pictures, which takes almost no time, will score a few upvotes from people that like looking at their little jellybean toes, but violates content norms.
Which one of these is optimal changes, depending on how tolerant you are of downvoting and how good you are at dodging it. But while removing the downvote option incentivizes all three (which is why social media likes it), it should be clear that it incentivizes the last two much more. You can see the fruits of this on Facebook groups, that site’s closest analogy to what’s being proposed here. (Tumblr, and Facebook user pages, are also upvote-only in practice, but their sharing and friending mechanisms make them harder to analyze in these terms.)
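The game described above can be made concrete with a toy calculation. All the numbers here (time costs, vote distributions, the 2:1 pain ratio) are illustrative assumptions, not measurements:

```python
# Toy model of the posting "game": three strategies, each with a time
# cost and an expected vote distribution. Downvotes are assumed to
# hurt twice as much as upvotes help (loss_ratio = 2.0).

def expected_utility(upvotes, downvotes, loss_ratio=2.0):
    """Net utility of one post, with downvotes weighted more heavily."""
    return upvotes - loss_ratio * downvotes

# strategy -> (hours per post, expected upvotes, expected downvotes)
STRATEGIES = {
    "research":     (5.0, 10.0, 0.5),
    "debate":       (1.0,  4.0, 2.0),
    "cat_pictures": (0.1,  2.0, 3.0),
}

def utility_per_hour(allow_downvotes=True):
    rates = {}
    for name, (hours, up, down) in STRATEGIES.items():
        if not allow_downvotes:
            down = 0.0  # upvote-only forum: the penalty disappears
        rates[name] = expected_utility(up, down) / hours
    return rates

with_down = utility_per_hour(allow_downvotes=True)
without_down = utility_per_hour(allow_downvotes=False)
```

Under these particular assumptions, research has the best return per hour when downvotes exist, but removing them boosts the low-effort strategies far more, and cat pictures jump from strongly negative to the best return of the three.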
He isn’t suggesting making LW upvote-only, just creating a new section of it that is upvote-only. And why not? If you’re right, the evidence will bear out that it is a terrible system. But we won’t know until we test the idea.
An earlier version of my comment read “LW or parts of it”. Edited it out for stylistic reasons and because I assumed the application to smaller domains would be clear enough in context. Guess I was wrong.
Granted, not everything I said would apply to the first proposal, the one where top-level posts are upvote-only but comments aren’t. That’s a little more interesting; I’m still leery of it but I haven’t fully worked out the incentives.
As to empirics, one thing we’re not short on is empirical data from other forums. We’re not so exceptional that the lessons learned from them can’t be expected to apply.
Apologies if that seemed like a nitpick (which I try to avoid). I thought it was relevant because even if you are right, trying out the new system wouldn’t mean making LessWrong terrible, it would just mean making a small part of LessWrong terrible (which we could then get rid of). The cost is so small that I don’t see why it shouldn’t be tried.
I think the cost is higher than you’re giving it credit for. Securing dev time to implement changes around here is incredibly hard, at least if you aren’t named Eliezer, and changes anywhere are usually harder to back out than they are to put in; we can safely assume that any change we manage to push through will last for months, and forever is probably more likely.
Hacker News has a downvote, but you need to have 500 karma to use it. This keeps it from being used too often, and ensures it’s used only by people very familiar with the community culture. Stack Overflow allows anyone to downvote, but it costs you some of your own karma, which discourages it.
HN also hides the votes that comments have. And reddit has been moving to this policy as well.
Imagine posting as a game scored in utility. Upvotes gain you utility; downvotes lose you it
That’s exactly my problem with reddit-style voting in general. Human communication, even in an impoverished medium such as forum posting, is highly, highly complex and pluridimensional. Plus one and minus one don’t even begin to cover it. Even when the purpose is a quick and informal moderation system. Good post on a wholly uninteresting topic? Good ideas once you get past the horrendous spelling? One-line answers? Interesting but highly uncertain info? Excessive posting volume? The complete lack of an answer where one would have been warranted? Strong (dis)approval looking just like mild (dis)approval? Sometimes it’s difficult to vote.
Besides, the way it is set up, the system implicitly tells people that everyone’s opinion is valid, and equally valid at that. Good for those who desire democracy in everything, but socially and psychologically not accurate. Some lurker’s downvote can very well cancel out EY’s upvote, for instance, and you’ll never know. Maybe some sort of weighted karma system would work better, wherein votes would count more according to a combination of the voter’s absolute karma and positive karma percentage.
To address your specific concerns about upvote-only systems, positive feedback expressed verbally may be boring to read and to write, hence reducing it to a number, but negative feedback expressed silently through downvotes leaves you wondering what the hell is wrong with your post and according to who. As long as people can still reply to each other, posters of cat pictures can still be disapproved of, even without downvotes. And perhaps the criticism may stick more if there are words to “haunt” you rather than an abstract minus one.
However, this one strongly depends on community norms. If the default is approval, then the upvote is the cheap signal and a downvote-only system can in fact work better. If the default is disapproval, then the downvote is a cheap signal. An upvote-only policy works best in a significantly more hostile environment.
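The weighted-karma idea above could be sketched as follows. The logarithmic scaling and the simple product are assumptions chosen for illustration, not a worked-out proposal:

```python
# Sketch of a weighted-karma system: a vote's weight grows with the
# voter's absolute karma and is scaled by their positive-karma
# percentage, so a fresh lurker's downvote no longer exactly cancels
# a well-regarded veteran's upvote.

import math

def vote_weight(absolute_karma, positive_pct):
    """positive_pct is in [0, 1]; weight grows slowly (log10) with
    karma so established users count more without drowning out
    everyone else."""
    base = 1.0 + math.log10(1 + max(absolute_karma, 0))
    return base * positive_pct

def weighted_score(votes):
    """votes: list of (direction, absolute_karma, positive_pct)."""
    return sum(d * vote_weight(k, p) for d, k, p in votes)

# One veteran upvote plus one new-lurker downvote nets out positive:
score = weighted_score([(+1, 10_000, 0.95), (-1, 5, 0.60)])
```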
Other. I do not think there is a need for a new section. Instead, we could encourage people to use tags (e.g. something like these belief tags) and put disclaimers at the top of their posts. Even though actual tags aren’t very easy to notice, we can use “informal tags”, such as putting a tag in square brackets.
For example, if you want to post an unpolished idea, your post could be titled something like “A statement of the idea [Epistemic state: speculation] [Topic: Something]”, “A statement of the idea [Epistemic state: possible] [Topic: Something]”, or “A statement of the idea [Epistemic state: a very rough draft] [Topic: Something]”. In addition, you could put a disclaimer at the top of your post. Such clarity might make it somewhat easier to be lenient on unpolished ideas. As things stand, even if a reader can see that the poster intended their post as a rough draft with many flaws, they cannot be sure that the draft being highly upvoted won’t be taken by another reader as a sign that the post is correct and flawless (or at least thought to be by a lot of LWers), thus sending the wrong message. If a poster made it clear that they are merely exploring a curious idea, an interesting untested model, or something that has only a remote chance of not being not even wrong, a reader could upvote or downvote the post based on what it was trying to achieve: there would be less need to signal to other readers that a post has serious flaws and should not be believed, if it was already tagged as “unlikely” or something like that.
Perhaps, numerical values to indicate the belief status (e.g. [0.3]) could be used instead of words.
There would still be an incentive to tag your posts as “certain” or “highly likely”, because most likely they would be treated as having more credibility and thus attract more readers.
Another approach would be to not open downvoting to all users. On the Stack Exchange network, for example, you need a certain amount of reputation to downvote someone. I’d bet that a very large majority of the discouraging/unnecessary/harmful downvotes come from users who don’t have above, say, 5-15 karma in the last month. Perhaps official downvote policies, messaged to a user the first time they pass that threshold, would help too.
This way involved users can still downvote bad posts, and the bulk of the problem is solved.
But it requires technical work, which may be an issue.
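The gating rule suggested above is small enough to sketch. The threshold value is an arbitrary pick from the 5-15 range mentioned, and the function names are hypothetical:

```python
# Downvotes are accepted only from users above a recent-karma
# threshold; gated downvotes are simply dropped.

DOWNVOTE_THRESHOLD = 10  # arbitrary pick from the suggested 5-15 range

def can_downvote(recent_karma):
    """Only users with enough recent karma may downvote."""
    return recent_karma >= DOWNVOTE_THRESHOLD

def apply_vote(score, direction, voter_recent_karma):
    """direction is +1 (upvote) or -1 (downvote); a downvote from a
    low-karma user leaves the score unchanged."""
    if direction < 0 and not can_downvote(voter_recent_karma):
        return score
    return score + direction
```

The rejection branch is also the natural place to trigger the one-time policy message discussed below, if someone did the technical work.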
Perhaps official downvote policies messaged to a user the first time they pass that would help too.
Anything with messages could be implemented by a bot account, right? That could be made without having to change the Less Wrong code itself.
Maybe we could send a message to users with guidelines on downvoting every time they downvote something? This would gently discourage heavy and/or poorly reasoned downvoting, likely without doing too much damage to the kind of downvoting we want. One issue with this is it would likely be very difficult or practically impossible for a bot account to know when someone downvotes something without changing the LW code. (Though it probably wouldn’t require a very big change, and things could be limited to just the bot account(s).)
Every time someone downvotes would probably be too much, but maybe the first time, or if we restrict downvotes only for users with some amount of karma then when they hit that level of karma?
Would you be willing to run a survey on Discussion also about Main being based on upvotes instead of a mix of self-selection and moderation? As well as all ideas that seem interesting to you that people suggest here?
There could be a research section, an Upvoted section, and a discussion section, where the research section is also displayed within the upvoted, trending one.
On second thought, I’ll risk it. (I might post a comment to it with a compilation of my ideas and my favorites of others’ ideas, but it might take me a while.)
Would you be willing to run a survey on Discussion also about Main being based on upvotes instead of a mix of self-selection and moderation? As well as all ideas that seem interesting to you that people suggest here?
I’d rather not expose myself to the potential downvotes of a full Discussion post, and I also don’t know how to put polls in full posts, only in comments. Nonetheless I am pretty pro-poll in general and I’ll try to include more of them with my ideas.
Contrast the LW model with the “conversational blogging” model where you sit down, scribble some thoughts out, hit post, and see what your readers think. Without worrying excessively about what readers think, you’re free to write in open mode and have creative ideas you wouldn’t have when you’re feeling self-critical.
I don’t know if I’ve ever read the following from an original source (i.e., Eliezer or Scott), but when people ask “why do those guys no longer post on Less Wrong?”, the common response I get from their personal friends in the Bay Area, and from the community at large, is that, justified or not, the worry that their posts would be overly criticized is what drove them off Less Wrong for fairer pastures, where their ideas wouldn’t need to pass through a crucible of (possibly motivated) skepticism before being valued or spread.
Which shows that a bug to some people is a feature to others.
A lot of posts, including in the Sequences, have really good criticisms in the comments. (For that matter, a lot of SSC posts have really good criticisms in the comments, which Scott usually just ignores.) I can easily understand why people don’t like reading criticism, but if you’re posting for the ideas, some criticism should be expected.
I have some interesting idea for a Less Wrong post. I sit down and excitedly start writing it out.
The key point seems to be not to aim for Main if you have some creative idea. Most creative ideas fail. That doesn’t mean they were bad ideas, just that creativity doesn’t work like safe success. Main is for a specific audience and requires a specific class of writers. Why not aim for Discussion or the Open Thread? Yes, these are tiers, and maybe a smoother transition would be nicer, but as it is, that works fine.
Within an hour, I have thought of so many potential criticisms or reasons that my post might come across as lame that I am totally demoralized. I save my post as a draft, close the tab, and never return to it.
This. My standard for what I would post on LW eventually just became too high—higher than what I would post on my own blog, and beyond justifiable effort.
This comment is great. Please cross-post the suggestions for effective altruism especially to the Effective Altruism Forum. If you don’t, do you mind if I do?
Thanks! I already linked to my comment from the EA forum. If you want to signal-boost it further, maybe put a link to it and/or a summary of my suggestions in the EA Facebook group? By the way, I’m planning to write a longer post fleshing out the idea of peer-reviewed blog posts at some point.
It doesn’t help that even the most offhand posting is generally treated as if it were an academic paper, and reviewed and skewered accordingly :-p.
I agree. There are definitely times for unfiltered criticism, but most people require a feeling of security to be their most creative.
I believe this is referred to as “psychological safety” in the brainstorming literature, for whatever that’s worth.
Is anyone in favor of creating a new upvote-only section of LW?
[pollid:988]
Proposals for making LW upvote-only emerge every few months, most recently during the retributive downvoting fiasco. I said then, and I continue to believe now, that it’s a terrible idea.
JMIV is right to say in the ancestor that subtle features of moderation mechanics have outsized effects on community culture; I even agree with him that Eliezer voiced an unrealistically rosy view of the downvote in “Well-Kept Gardens”. But upvote-only systems have their own pitfalls, and quite severe ones. The reasons behind them are somewhat complex, but boil down to bad incentives.
Imagine posting as a game scored in utility. Upvotes gain you utility; downvotes lose you it; and for most people being downvoted costs you more than being upvoted gains you, though the exact ratio varies from person to person. You want to maximize your utility, and you have a finite amount of time to spend on it. If you spend that time researching new content to post, your output is low but it’s very rarely downvoted. Debate takes a moderate amount of time; votes on debate are less reliable, especially if you’re arguing for something like neoreaction or radical feminism or your own crackpot views on time and dimension, but you’re all but guaranteed upvotes from people that agree with you. Plus telling people they’re wrong is fun, so you get some bonus utility. Finally, you can post cat pictures, which takes almost no time, will score a few upvotes from people that like looking at their little jellybean toes, but violates content norms.
Which one of these is optimal changes, depending on how tolerant you are of downvoting and how good you are at dodging it. But while removing the downvote option incentivizes all three (which is why social media likes it), it should be clear that it incentivizes the last two much more. You can see the fruits of this on Facebook groups, that site’s closest analogy to what’s being proposed here. (Tumblr, and Facebook user pages, are also upvote-only in practice, but their sharing and friending mechanisms make them harder to analyze in these terms.)
He isn’t suggesting making LW upvote only. Just a creating a new section of it that is upvote only. And why not? If you’re right the evidence will bear out that it is a terrible system. But we won’t know until we test the idea.
An earlier version of my comment read “LW or parts of it”. Edited it out for stylistic reasons and because I assumed the application to smaller domains would be clear enough in context. Guess I was wrong.
Granted, not everything I said would apply to the first proposal, the one where top-level posts are upvote-only but comments aren’t. That’s a little more interesting; I’m still leery of it but I haven’t fully worked out the incentives.
As to empirics, one thing we’re not short on is empirical data from other forums. We’re not so exceptional that the lessons learned from them can’t be expected to apply.
Apologies if that seemed like nitpick (which I try to avoid). I thought it was relevant because even if you are right, trying out the new system wouldn’t mean making LessWrong terrible, it would just mean making a small part of LessWrong terrible (which we could then get rid of). The cost is so small so that I don’t see why its shouldn’t be tried.
I think the cost is higher than you’re giving it credit for. Securing dev time to implement changes around here is incredibly hard, at least if you aren’t named Eliezer, and changes anywhere are usually harder to back out than they are to put in; we can safely assume that any change we manage to push through will last for months, and forever is probably more likely.
Hacker News has a downvote, but you need to have 500 karma to use it. This keeps it from being used too often, and only by people very familiar with the community culture. Stackoverflow allows anyone to downvote, but you have to spend your own karma, to discourage it.
HN also hides the votes that comments have. And reddit has been moving to this policy as well.
That’s exactly my problem with reddit-style voting in general. Human communication, even in an impoverished medium such as forum posting, is highly, highly complex and pluridimensional. Plus one and minus one don’t even begin to cover it. Even when the purpose is a quick and informal moderation system. Good post on a wholly uninteresting topic? Good ideas once you get past the horrendous spelling? One-line answers? Interesting but highly uncertain info? Excessive posting volume? The complete lack of an answer where one would have been warranted? Strong (dis)approval looking just like mild (dis)approval? Sometimes it’s difficult to vote.
Besides, the way it is set up, the system implicitly tells people that everyone’s opinion is valid, and equally valid at that. Good for those who desire democracy in everything, but socially and psychologically not accurate. Some lurker’s downvote can very well cancel out EY’s upvote, for instance, and you’ll never know. Maybe some sort of weighted karma system would work better, wherein votes would count more according to a combination of the voter’s absolute karma and positive karma percentage.
To address your specific concerns about upvote-only systems, positive feedback expressed verbally may be boring to read and to write, hence reducing it to a number, but negative feedback expressed silently through downvotes leaves you wondering what the hell is wrong with your post and according to who. As long as people can still reply to each other, posters of cat pictures can still be disapproved of, even without downvotes. And perhaps the criticism may stick more if there are words to “haunt” you rather than an abstract minus one.
However, this one strongly depends on community norms. If the default is approval, then the upvote is the cheap signal and a downvote-only system can in fact work better. If the default is disapproval, then the downvote is a cheap signal. An upvote-only policy works best in a significantly more hostile environment.
Other. I do not think there is a need for a new section. Instead, we could encourage people to use tags (e.g. something like these belief tags) and put disclaimers at the top of their posts. Even though actual tags aren’t very easy to notice, we can use “informal tags”, such as, e.g. putting a tag in square brackets.
For example, if you want to post an unpolished idea, you could title it something like “A statement of the idea [Epistemic state: speculation] [Topic: Something]”, “A statement of the idea [Epistemic state: possible] [Topic: Something]”, or “A statement of the idea [Epistemic state: a very rough draft] [Topic: Something]”, and additionally put a disclaimer at the top of the post. Such clarity might make it somewhat easier to be lenient on unpolished ideas. As things stand, even if a reader can see that the poster intended their post as a rough draft with many flaws, they cannot be sure that the draft being highly upvoted won’t be taken by another reader as a sign that the post is correct and flawless (or at least thought to be by a lot of LWers), thus sending the wrong message. If a poster made it clear that they were merely exploring a curious idea, an interesting untested model, or something that might well be not even wrong, a reader could upvote or downvote based on what the post was trying to achieve; there would be less need to signal to other readers that a post has serious flaws, and therefore should not be believed, if it was already tagged as “unlikely” or something like that.
Perhaps, numerical values to indicate the belief status (e.g. [0.3]) could be used instead of words.
There would still be an incentive to tag your posts as “certain” or “highly likely”, because most likely they would be treated as having more credibility and thus attract more readers.
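Informal bracket tags like these would be easy to handle mechanically as well. A minimal sketch of extracting them from a title (the helper name and the regex are mine; this is not an existing LW feature):

```python
import re

# Matches informal "[Key: value]" tags in a post title.
TAG_RE = re.compile(r"\[([^:\]]+):\s*([^\]]+)\]")

def parse_informal_tags(title):
    """Split a title like
    'A statement of the idea [Epistemic state: speculation] [Topic: Something]'
    into the bare title and a dict of its informal tags."""
    tags = {k.strip(): v.strip() for k, v in TAG_RE.findall(title)}
    bare = TAG_RE.sub("", title).strip()
    return bare, tags
```

A bot or a future site feature could use something like this to surface epistemic states without any change to how people actually write titles.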
Another approach would be not opening downvotes to all users. On the Stack Exchange network, for example, you need a certain amount of reputation to downvote someone. I’d bet that a very large majority of the discouraging/unnecessary/harmful downvotes come from users who don’t have above, say, 5-15 karma in the last month. Perhaps official downvote policies, messaged to a user the first time they pass that threshold, would help too.
This way involved users can still downvote bad posts, and the bulk of the problem is solved.
But it requires technical work, which may be an issue.
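The technical work here is modest; a sketch of the threshold gate plus the one-time guidelines message (the threshold value, field names, and function names are all assumptions, not LW’s or Stack Exchange’s actual code):

```python
DOWNVOTE_KARMA_THRESHOLD = 10  # hypothetical; the suggestion above is 5-15

def can_downvote(karma_last_month):
    """Only users above a recent-karma threshold may downvote."""
    return karma_last_month >= DOWNVOTE_KARMA_THRESHOLD

def maybe_send_downvote_policy(user, send_message):
    """Send the official downvote guidelines once, the first time a
    user qualifies to downvote."""
    if can_downvote(user["karma_last_month"]) and not user["policy_sent"]:
        send_message(user["name"], "Please read the downvote guidelines: ...")
        user["policy_sent"] = True
```

The message half could be handled by a bot account; only the gate itself needs a change to the site code.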
Anything with messages could be implemented by a bot account, right? That could be made without having to change the Less Wrong code itself.
Maybe we could send users a message with guidelines on downvoting every time they downvote something? This would gently discourage heavy and/or poorly reasoned downvoting, likely without doing much damage to the kind of downvoting we want. One issue is that it would likely be very difficult, or practically impossible, for a bot account to know when someone downvotes something without changing the LW code. (Though it probably wouldn’t require a very big change, and the changes could be limited to just the bot account(s).)
Every time someone downvotes would probably be too much, but maybe the first time, or if we restrict downvotes only for users with some amount of karma then when they hit that level of karma?
Would you also be willing to run a survey on Discussion about basing Main on upvotes instead of a mix of self-selection and moderation? And about whichever ideas suggested here seem interesting to you?
There could be a Research section, an Upvoted section, and a Discussion section, with posts from the Research section also displayed within the trending Upvoted one.
On second thought, I’ll risk it. (I might post a comment to it with a compilation of my ideas and my favorites of others’ ideas, but it might take me a while.)
I’d rather not expose myself to the potential downvotes of a full Discussion post, and I also don’t know how to put polls in full posts, only in comments. Nonetheless I am pretty pro-poll in general and I’ll try to include more of them with my ideas.
Another suggestion. Every downvote costs a point of your own karma. You must have positive karma to downvote.
Another suggestion: Every downvote costs a point of your own karma.
I don’t know if I’ve ever read the following from an original source (i.e., Eliezer or Scott), but when people ask “why do those guys no longer post on Less Wrong?”, the common response I get from their personal friends in the Bay Area, and from the community at large, is that, justified or not, the worry that their posts would be overly criticized is what drove them off Less Wrong for fairer pastures, where their ideas wouldn’t need to pass through a crucible of (possibly motivated) skepticism before being valued or spread.
Which shows that a bug to some people is a feature to others.
A lot of posts, including in the Sequences, have really good criticisms in the comments. (For that matter, a lot of SSC posts have really good criticisms in the comments, which Scott usually just ignores.) I can easily understand why people don’t like reading criticism, but if you’re posting for the ideas, some criticism should be expected.
All true.
The key point seems to be not to aim for Main if you have a creative idea. Most creative ideas fail; that doesn’t mean they were bad ideas, just that creativity doesn’t work like safe, predictable success. Main is for a specific audience and requires a specific class of writers. Why not aim for Discussion or the Open Thread? Yes, these are tiers, and maybe a smoother transition would be nicer, but as it is this works fine.
This. My standard for what I would post on LW eventually just became too high—higher than what I would post on my own blog, and beyond justifiable effort.
This comment is great. Please cross-post the suggestions for effective altruism especially to the Effective Altruism Forum. If you don’t, do you mind if I do?
Thanks! I already linked to my comment from the EA forum. If you want to signal-boost it further, maybe put a link to it and/or a summary of my suggestions in the EA Facebook group? By the way, I’m planning to write a longer post fleshing out the idea of peer-reviewed blog posts at some point.
I think he’s only talking about Slate Star Codex.