I’ve had this on my to-review log all review season, and I guess I’m getting it in mere hours before the season closes.
What does this post add to the conversation?
The most important piece I think this adds is that the problem is not simple.
I keep seeing people run into one of these community conflict issues and propose simple, reasonable-sounding ways to solve it. I do not currently believe the issue is impossible, but it’s not as simple as some folks think when they first stare at it.
How did this post affect you, your thinking, and your actions?
Context: For a time, Mingyuan’s role involved overseeing the global Astral Codex Ten community, particularly the in-person parts. When she resigned from that position, I was the one who took it up. I got to see a version of this post before it went live, shortly after I took on the role.
According to her, more than 50% of the reason she stepped down was that she became entangled in community conflicts. When you are about a month into a new role, your predecessor hands you a document saying a problem is impossible and that it caused her to resign, and you already have a fresh instance of the problem in your inbox? That tends to affect you, your thinking, and your actions.
Does it make accurate claims? Does it carve reality at the joints? How do you know? / Is there a subclaim of this post that you can test?
Well, I haven’t stepped down because of becoming entangled in community conflict yet. This kind of thing is in the top three most likely reasons I’d step down if I do someday resign, though. (Citation: my own self-evaluation.)
Do investigations eat up hundreds of person-hours? Yep, they totally can if you let them. Skill, practice, and a willingness to triage can cut that down a lot.
Do panels generally have much real ability to enforce things? Here I half disagree. The rationality community in particular is fuzzy and amorphous, without a clear single roster or doorway. In that situation, a panel can’t enforce things. More structured communities (including sub-communities within the rationalist community) can potentially give panels enforcement ability. The LessWrong site itself, the r/rational discord, or an organization with a venue space like Mox can delegate a ban decision to a panel and enforce it. That said, this requires the panel to be granted that capability, and the enforcing entity to actually carry it out.
If you just try to convene a panel mid-conflict, and it doesn’t have a scope or actual authority to do anything (“actual authority” meaning they’ve got the ban command on the forum, or the keys to the dormitory, or the people who do will likely follow the panel’s advisement, or something like that), then yeah, it’s not going to have much ability to enforce anything.
Do panels act like they are courts of law? Unclear. The ones I’m most familiar with had literal lawyers on them, though not in a professional capacity. I think a little of this is that the panel is trying to hold a higher standard of evidence, but also that trust isn’t transitive: it’s just much easier for me to feel confident in what I experienced than it is for a panel to feel confident that I’m relaying my experience accurately.
Do panels often lack a secure sense of legitimacy? Unclear, I’m not in their heads. I do think that giving a panel a clear scope is very useful.
Do favourable rulings lend legitimacy to bad actors? Yep, and more than you’d think. If you’re a bad actor whom a panel has investigated and deemed fine, bringing that fact up as often as it seems helpful is a very straightforward move. A useful intuition pump: picture a social deception game where an ability that detects what team someone is on has somehow been spoofed or misread. One thing that caught me by surprise is that the rulings don’t even need to be straightforwardly favourable: I’ve seen mixed rulings quoted in ways I consider pretty out of context.
Are extended investigations stressful for all parties involved? Unclear, I’m not in their heads, but people do keep saying this is true.
Is there often no way to find the objective truth? Yes, or at least no reasonable way. One relevant skill is sifting through the various claims and noticing which ones would be potentially provable. Also unmentioned here is how often no single objective truth is what actually matters.
What followup work would you like to see building on this post?
I’ve been trying!
Interest In Conflict Is Instrumentally Convergent is my single best followup, if I had to pick one. I really wish I’d written up more of what I learned from other communities. The rationality community is odd in many ways, but there are things we can learn. Sci-fi conventions, service non-profits, sports leagues, community colleges, martial arts dojos, small churches: all of these groups and more have lessons for us in how to manage and respond to incidents.
If you’re part of a community with a process for incident response, I’d be grateful if you’d write up how that works. You don’t need to give details on any specific incidents; just walking through the steps that would be taken for a couple of hypotheticals, and who would handle each part, is useful for comparison.
So, what to vote?
It’s a good essay. It’s going in my collection of things people should read if they’re interested in the topic. It’s relevant to LessWrongers, and it’s relevant to most communities, at least in theory. It’s spot-on relevant for my sub-special interest of the last year.
For all that, I personally gave it a +1, though I can see a pretty clear argument for +4 and might change my mind. It’s a bit more niche than I’d want for a +4, it’s coming from a place of frustration (one I empathize with), and I wish it offered some more actionable moves or improvements.