Some thoughts on resource bottlenecks and strategy.
There’s a lot I like about the set of goals Duncan is aiming for here, and IMO the primary question is one of prioritization.
I do think some high-level things have changed since 2018-or-so. Back when I wrote Meta-tations on Moderation, the default outcome was that LW withered and died, and it was really important people move from FB to LW. Nowadays, LW seems broadly healthy, the team has more buy-in, and I think it’s easier to do highly opinionated moderation more frequently for various reasons.
On the other hand, we did just recently refactor the LW team into Lightcone Infrastructure. Most of the team is now working on a broader project of “figure out the most important bottlenecks facing humanity’s ability to coordinate on x-risk, and build things that fix those bottlenecks” (involving lots of pivoting). Ruby is hiring more people to build more capacity on the LW team, but hiring well is a slow process. And most of the plans that seem to accomplish (some version of) what Duncan is pointing to here seem really expensive.
The good news is that we’re not particularly money-constrained these days. The biggest bottlenecked resource is team attention. When I imagine the “hire a bunch of moderators to full-time respond to every single comment” plan (not an inherently crazy idea IMO), the bottleneck is vetting, hiring, training, and managing those moderators.
I do think “which standards exactly, and what are they aiming for?” is a key question. A subproblem is that rigidly pushing for slightly misaligned standards is really infuriating, and IMO drives people away from the site for reasons I don’t think are good. Part of the reason I think hiring moderators is high effort is that a bad (or “merely pretty good”) moderator can be really annoying and unhelpful.
I am pretty optimistic about technological solutions that don’t require scaling human attention (and I do think there’s a lot of low-hanging fruit there).
Brainstorming some ideas:
- Users get better automated messaging about site norms when they first start posting. Duncan and Ruby both mentioned variants of this.
- One option: in order to start growing in vote power, a user has to read some material and pass some kind of multiple-choice test about the site norms. (I might even make it so everyone temporarily loses the ability to Strong Vote until they’ve taken the test.)
- Checking whether a user has gotten a moderate amount of net-downvotes recently (regardless of total karma), and then some combo of flagging them for mod attention and sending them an automated message: “hey, it seems like something has been up with your commenting lately. You should reflect on that somehow [advice on how to do so, the primary suggestion being to comment less frequently and more thoughtfully]. If it keeps up you may temporarily lose posting privileges.” (A sketch of this check follows the list.)
- FB-style reacts that let people more easily give feedback about what’s wrong with something without replying to it.
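To make the downvote-check idea concrete, here’s a minimal sketch of what that heuristic might look like. Everything in it is a hypothetical illustration: the `RecentVote` shape, the `checkRecentDownvotes` function, the moderation hooks, and both thresholds are made up for this sketch, not part of the actual LW codebase, and the numbers would need tuning against real data.

```typescript
// Hypothetical sketch of the "recent net-downvotes" check described above.
// None of these names or thresholds come from the actual LessWrong codebase.

interface RecentVote {
  power: number;   // e.g. +/-1 for normal votes, larger magnitude for strong votes
  votedAt: Date;
}

interface ModerationHooks {
  flagForMods: (userId: string, reason: string) => void;
  sendAutomatedMessage: (userId: string, message: string) => void;
}

const LOOKBACK_DAYS = 30;        // how far back "recently" reaches (made up)
const NET_KARMA_THRESHOLD = -15; // recent net score that triggers action (made up)

function checkRecentDownvotes(
  userId: string,
  votesOnUsersComments: RecentVote[],
  hooks: ModerationHooks,
): void {
  const cutoff = Date.now() - LOOKBACK_DAYS * 24 * 60 * 60 * 1000;

  // Sum vote power on the user's recent comments only, so a high total karma
  // from years of good posting can't mask a recent bad streak.
  const recentNet = votesOnUsersComments
    .filter((v) => v.votedAt.getTime() >= cutoff)
    .reduce((sum, v) => sum + v.power, 0);

  if (recentNet <= NET_KARMA_THRESHOLD) {
    hooks.flagForMods(userId, `net ${recentNet} karma in last ${LOOKBACK_DAYS} days`);
    hooks.sendAutomatedMessage(
      userId,
      "Hey, it seems like something has been up with your commenting lately. " +
        "Consider commenting less frequently and more thoughtfully. " +
        "If this keeps up you may temporarily lose posting privileges.",
    );
  }
}
```

The key design choice is scoping the sum to a recent window rather than total karma, which is what keeps long-tenured users from being exempt from the check.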
Something that previously seemed some-manner-of-cruxy between me and Duncan (though I’m not 100% sure about the flavor of the crux) is “LessWrong whose primary job is to be a rationality dojo” vs “LessWrong whose primary job is to output intellectual progress.”
Where, certainly, there’s good reason to think the Intellectual Progress machine might benefit from a rationality dojo embedded in it. But that’s just one idea for how to improve the rate of intellectual progress, and my other background models point more towards other things as being more important for that.
BUT there is a particular model-update I’ve had that is new, which I haven’t gotten around to writing up yet. (This is less of a reply to Duncan and more to other people I’ve argued with over the years)
A key piece of my model is that a generative intellectual process looks very different from the finished output. It includes lots of leaps of intuition, inferential distance, etc. In order to get top thinkers onto LW on a regular basis, rather than into small private discords, it’s really important for them to be able to think out loud without being legible at every step. And the LW team got a lot of complaints from good authors in 2018 about LW being punishing about this.
But there’s a different problem, which is that newcomers who haven’t yet gotten a lot of practice thinking deliberately/rationally need to get that practice. If you show up at university, you basically write bad essays for 4 years, and only your professor (who is paid) is obligated to read them.
And then there is a blurry line between “metaphorical undergraduates who are still learning,” “metaphorical grad students” (who write “real” things, but not always at high quality or with good judgment), and “metaphorical professors.”
In 2018, lots of people agreed LW was too nitpicky. But an update I made in late 2019 was that the solutions for metaphorical undergrads, grads, and professors might look pretty different. This probably relates to the preformal/formal/postformal distinction that Vaniver points at elsethread. And I think it lends itself to a reasonable operationalization of “who are the cool kids who are above the law?” (if one tried implementing something like that suggestion in Duncan’s OP).
So I now think it’s more reasonable for new users to basically expect to have all of their stuff critiqued on basic things.
(But I still think that, for the critique to be useful a fair amount of the time, it’s important to have a good model of what intellectual generativity requires.)
A further complication is the metaphorical “grad students,” who blur the line on how much leeway it makes sense to give them. I think many past LW arguments about moderation also had a component of “who exactly are the students, grad students, and professors here?”
None of this translates immediately into an obviously good process, but it’s part of the model of how I think such a process should get designed.
Strong agreement with this, assuming I’ve understood it. High confidence that it overlaps with what Vaniver laid out, and with my interpretation of what Ben was saying in the recent interaction I described under Vaniver’s comment.
EDIT: One clarification that popped up under a Vaniver subthread: I think the pendulum should swing more in the direction laid out in the OP. I do not think that the pendulum should swing all the way there, nor that “the interventions gestured at by the OP” are sufficient. Just that they’re something like necessary.
Small addition: LW 1.0 made it so you had to have 10 karma before making a top-level post (maybe just on Main? I don’t remember, but probably you do). I think this probably matters a lot less now that new posts have to be approved and mods have to manually promote things to frontpage. But I don’t know, theoretically you could gate fraught discussions like the recent ones to users above a certain karma threshold (sketched below)? Some of the lowest-quality comments on those posts wouldn’t have happened in that case.
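To illustrate the gating idea, here’s a minimal sketch under the assumption that mods could flag individual posts with a comment threshold. The `commentKarmaThreshold` field, the `canCommentOn` function, and the numbers are all hypothetical, not actual LW features or schema.

```typescript
// Hypothetical sketch of gating comments on a "fraught" post behind a
// karma threshold. Names and numbers are illustrative, not from LW's code.

interface User {
  id: string;
  karma: number;
}

interface Post {
  id: string;
  commentKarmaThreshold?: number; // undefined means the post is not gated
}

function canCommentOn(user: User, post: Post): boolean {
  if (post.commentKarmaThreshold === undefined) return true;
  return user.karma >= post.commentKarmaThreshold;
}

// Example: a mod gates a heated thread at 100 karma.
const heatedThread: Post = { id: "fraught-discussion", commentKarmaThreshold: 100 };
const newcomer: User = { id: "new-user", karma: 12 };

console.log(canCommentOn(newcomer, heatedThread)); // false – below the gate
```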
I guess where I’d like to see more moderator intervention would largely be in directing the conversation. For example, by creating threads for the community to discuss topics that you think it would be important for us to talk about.