For what it’s worth, I think some of those terrible ideas are great or close to great.
In particular:
Hire a team of well-paid moderators for a three-month high-effort experiment of responding to every bad comment with a fixed version of what a good comment making the same point would have looked like. Flood the site with training data.
Make a fork of LessWrong run by me, or some other hopeless idealist who still thinks that there might be something actually good that we can get if we actually do the thing (but not if we don’t).
Create an anonymous account with special powers called TheCultureCurators or something, and secretly give the login credentials to a small cadre of 3-12 people with good judgment and mutual faith in one another’s good judgment. Give TheCultureCurators the ability to make upvotes and downvotes of arbitrary strength, or to add notes to any comment or post à la Google Docs, or to put a number on any comment or post that indicates what karma TheCultureCurators believe that post should have.
Rob Bensinger wants me to note that he agrees.
The first one would be costly and annoying to lots of people, but also time-boxed and super interesting. Training data is really good, and very pedagogically valuable.
The second one just seems low cost to everyone except the idealist, so if they’re willing, great!
The third would be controversial and complicated, but, for instance, having them attach the karma number they think a post should have wouldn’t change the current voting system, would add information, and could be time-boxed like the first one.
Mostly I appreciate just the generation of lots of ideas to give my brain more to chew on and a sense that bigger things are possible.
Also more generally I really resonate with “dear God, I need the other people around me to be good at this to be my best self.”
I’m curious what you and others think of Raelfin’s post about the karma system: https://www.lesswrong.com/posts/xN2sHnLupWe4Tn5we/improving-on-the-karma-system
I like it, but not as much as I like two-axis proposals, which I think can be done with a smooth enough UI that they don’t impose a burden.
With something like the below, you can click to weak-vote and hold to strong-vote, just as we currently do, and can in one click express each of the following four positions:
I like/agree with this point, and furthermore think it’s being expressed correctly/is in line with norms of reasoning and discourse I want to see more of on LW (dark blue)
I like/agree with this point, but I want to note an objection to how it’s being expressed/have reservations about whether it’s good rationality or good discourse (pale orange)
I dislike/disagree with this point, and also want to note an objection to how it’s being expressed (dark orange)
I dislike/disagree with this point, but want to endorse/support the way it was arrived at and expressed (pale blue)
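To make the proposal concrete, here is a minimal sketch of how the four colored positions decompose into two independent axes (agreement and discourse quality), each with weak/strong strength. All names and the 3x weight for a strong vote are my own assumptions for illustration, not anything LessWrong actually implements:

```python
# Hypothetical model of the two-axis vote proposal above.
# Names (Agreement, Quality, Vote, tally) and the strong-vote
# weight of 3 are illustrative assumptions, not a real API.
from dataclasses import dataclass
from enum import Enum


class Agreement(Enum):
    AGREE = 1
    DISAGREE = -1


class Quality(Enum):
    ENDORSE = 1   # expressed well / good discourse norms
    OBJECT = -1   # reservations about how it's expressed


@dataclass(frozen=True)
class Vote:
    agreement: Agreement
    quality: Quality
    strong: bool = False  # click = weak vote, hold = strong vote

    @property
    def weight(self) -> int:
        return 3 if self.strong else 1


def tally(votes):
    """Return (agreement_karma, quality_karma) for one comment."""
    agree = sum(v.agreement.value * v.weight for v in votes)
    quality = sum(v.quality.value * v.weight for v in votes)
    return agree, quality


# The four one-click positions from the list above:
dark_blue = Vote(Agreement.AGREE, Quality.ENDORSE)
pale_orange = Vote(Agreement.AGREE, Quality.OBJECT)
dark_orange = Vote(Agreement.DISAGREE, Quality.OBJECT)
pale_blue = Vote(Agreement.DISAGREE, Quality.ENDORSE)

# Two weak agreements plus one strong "disagree but well-argued" vote:
print(tally([dark_blue, pale_orange,
             Vote(Agreement.DISAGREE, Quality.ENDORSE, strong=True)]))
# -> (-1, 3): net disagreement, but strongly endorsed discourse quality
```

The point of the separation is visible in the example: a comment can end up net-negative on agreement while net-positive on quality, which a single karma number can’t express.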
Oh yeah, I’ve seen you post this before, I liked it!