This article gave me a bunch of food for thought. I don’t think it addresses my main cruxes re: previous disagreements I’ve had with Duncan, but it definitely gave me some new ideas and new vantage points to view old ones.
(Note 1: I won’t be commenting on Duncan’s comments on Benquo’s comments because I’m still in the process of chatting with Benquo about it. I have a number of relevant disagreements with both Ben and Duncan, and hope to resolve those disagreements at some point, but meanwhile I don’t have the bandwidth I’d require to engage with both of them at once.)
Some thoughts so far:
1. Hierarchy of Goals
The hierarchy of “purposes of LessWrong” that Duncan describes is roughly the same one I’d describe. A concern, or difference in framing, I have here is that several of the stages reinforce each other in a cyclical fashion.
I’m not quite sure you can cleanly prioritize truth over truthseeking culture.
If our culture isn’t outputting useful accumulation of knowledge, then it’s failing at our core mission. Definitely. But in the situations where truthseeking-culture vs truth seem to be in conflict, I think it’s often because there’s a hard problem that defies easy answers. (see “Decoupled vs Contextual” and “Tensions in Truthseeking” for rough examples).
2. “Every Comment Gets Read, and Acted On in Some Fashion”
I agree that this is an important goal worth striving for. I don’t think it’s achievable in the immediate future due to resource constraints, but it’s useful to have in mind as the bar to measure against.
I think we’ve actually recently made a lot of progress on 80/20ing this – the new moderator sidebar forces someone to engage with all new posts, and all “at-risk” comments – those created by new users, or which have been downvoted or reported.
I liked Duncan’s proposed UI of “all un-read-by-moderators comments appear highlighted to mods”. I think it might be worth implementing something similar so that we gain a visceral sense of how close or far we currently are to “100% comment-check-coverage.”
3. Limited Resources, tool building
We do have pretty limited resources, and a fairly high bar for who we trust as a moderator (esp. at the level that Duncan is describing here). This is exacerbated by half the mods also being developers, so there is a direct tradeoff between building tools and engaging in high-bandwidth communication (as well as between putting organizational capacity into finding more moderators we trust vs., say, building out open source documentation).
So, for the immediate future I do lean towards solutions like “build tools that help make moderation as easy as possible,” rather than “make sure to fully engage with every comment that doesn’t uphold our standards.” (For example, right now there isn’t actually a tool that, with one click, locks an entire subthread – I can block replies to an individual comment, and I can lock all comments on a post, but I can’t lock all replies to all children of a comment, and this crimps the ability to put-things-in-stasis until a time when I have enough energy to moderate thoroughly.)
4. Collegial Culture
I do think that, all things being equal, high-bandwidth communication is ideal. I like the general thrust of the approach described in the initial example:
> Notice the details in the example above—they’re not random; most of them were put there deliberately and are doing important work. The phrase “appears to me to be” serves to highlight the critic’s awareness of uncertainty, that they may have misinterpreted things or missed detail. The framing of “thing we don’t do around here” is boundary-enforcing but not morally charged—it’s not that the behavior is fundamentally bad or wrong, just that it’s not part of our specific subcultural palette.
> The phrase “if I’m understanding you correctly” foreshadows a crux—if I’m understanding you, then I believe X, but if my understanding changes, I might not believe X anymore. The invocation of a “standard rationality move” (such as applying reductionism, checking the inverse of the hypothesis, or setting a five-minute timer) reinforces the shared culture of the site and models better behavior for newcomers. And the “thoughts?” at the end—especially if it occurs within a context where such invitations are demonstrably genuine, and not just lip service—actively draws the other person back into the conversation. The sum total of all of these little touches turns what might otherwise be the beginning of a fight into a cooperative, collaborative dynamic.
This is basically how I’d prefer to engage with most comments that don’t live up to a standard, and insofar as I don’t communicate that way it’s (usually) because I’m either under time-pressure/stressed/triggered, or, perhaps more generally, because I haven’t processed the overall strategy into an S1-response that flows smoothly in high-stakes situations.
I think Vaniver does this sort of thing better than me. It was helpful to me to have some of the details of this spelled out so that I could pay more attention to them.
The main potential disagreement I have here is with the scale of intervention within a single conversation that Duncan is suggesting. I can definitely imagine this turning out to be important, but I can also easily imagine it turning out to derail most conversations into meta-commentary on themselves.
5. Operationalization...
The part where I periodically disagree with Duncan is when it comes to the nuts and bolts of what sort of culture we’re actually trying to build. I have a lot to say on this, but at least some of it is stuff that I’d like to chat with Duncan about in a separate format from this, because I think the internet is uniquely bad for hashing out the details.
I should note that I do think this post contains all of my cruxes re: disagreement; i.e. that in every case where you’ve strongly disagreed with me about norms or policies or how-a-given-conversation-should-go, the principle I was acting on was among those laid out here.
(Most specifically: when it’s justified to punch back, what counts as not-meeting-the-standard-of-rationality, and how much leadership is obligated to actively defend the right-but-unpopular.)
From my own past experience as a mod and admin, I predict that scale concerns are miscalibrated. It’s a lot at first, but just as some teachers wage an unending losing battle against student misbehavior while others basically see no problems at all, ever … once the standard is set and enforcement is clear, consistent, and credible, problems 90% stop occurring.
(I acknowledge that I have not fully responded to all of your points; I wanted to register these things quickly but other stuff is probably worth responding to later in other comments.)
> From my own past experience as a mod and admin, I predict that scale concerns are miscalibrated. It’s a lot at first, but just as some teachers wage an unending losing battle against student misbehavior while others basically see no problems at all, ever … once the standard is set and enforcement is clear, consistent, and credible, problems 90% stop occurring.
Yeah, I can definitely imagine this being the case. I don’t have strong opinions on this concept, although I’m in part worried that doing it right involves a lot of skill, and failing to do it right may make things worse.
> I should note that I do think this post contains all of my cruxes re: disagreement;
Yeah, it makes sense that these are the parts that seem like the most salient points of disagreement. But I think it’s fairly important (and has been the last couple times we talked about this) that I agree with most of the cruxes listed here, and yet disagree with the conclusion. So it’s important to note that whatever is causing the disagreement isn’t actually covered here (or at least, not covered sufficiently).
> Note 1: I won’t be commenting on Duncan’s comments on Benquo’s comments because I’m still in the process of chatting with Benquo about it.
… I note that there’s a point in the near future at which continued lack of any public action by the LW team stops being “we want to take our time and get this right and not add fuel to the fire” and starts being a de facto endorsement and a taking of sides (since the comments I claim are objectionable remain visible to all, net upvoted, more than a week later, sans any moderating influence or perspective).
Separately, I also note that benquo’s made a comment here that I really really really want to reply to directly, but that my model of the LW leadership prefers no engagement until there’s facilitation in place. I’m not clear on whether or not there’s a plan to make that happen.