I think the governance problems you’re describing are hard for two distinct but related reasons worth addressing separately:
Disagreement on values
Overly broad scope of penalties
Disagreement on Values
Not everyone relates to governance and penalties the same way.
Some people have the naïvely appropriate attitude that governance is to protect public goods; if someone is likely to behave in ways that are dangerous or harmful to others we should identify them, and exclude them when needed to protect others. Punishments are a cost, and the point of graduated punishment and judicial process is to make sure they’re imposed only in cases where the benefit outweighs the cost.
Other people relate to punishments as a scapegoating process where from time to time the vibes demand we other someone, and the narrative about why we’re doing that is just part of the way we negotiate who gets othered. The fact that this is obviously illegitimate and parasitic on the naive attitude doesn’t prevent it from being how a lot of people feel.
Some people instrumentalize the whole thing as a tool to be used against their personal enemies, or a way to demonstrate factional loyalty.
Yet others see some positive opportunity in being naughty and getting away with things, will try to help out others who seem cool, and will try to derail any investigation that threatens this sort of covert mutual protection league.
Obviously we’re not going to get the naïvely good outcomes from a process where a large minority if not an outright majority of people are trying to derail the process one way or another, so there’s little point in trying if we’re not thinking about the adversarial problem explicitly.
Overly Broad Scope of Penalties
One of the major strategic wins scapegoating has scored against pronormativity is that people seem to assume that the natural punishment for sufficiently objectionable behavior is banning and shunning, regardless of the scope of the behavior. In many cases this is a bad penalty to impose, and the prospect of excessive penalties will motivate people otherwise sympathetic to the investigative process to opt out, derail, or even exit the community entirely, further worsening the problem of an antinormative majority.
For example, if someone tends to behave objectionably in certain ways to romantic partners, the obvious remedy is to make this information available to people considering dating them, so they can decide whether they find that behavior objectionable enough to avoid; excluding them from common spaces is just ridiculous.
I stopped engaging with or supporting the Bay Area EA / Rationalist Community’s attempts to create public spaces, in response to a couple of cases where it seemed like the “investigative” process had been totally compromised by a compulsion toward “splitting” behavior: formal community institutions were not trying to figure out and publicize what had happened and why, and then come up with rational risk-mitigation measures, but instead trying to figure out who the “bad guy” was and exclude them. For instance, the Brent Dill “investigations” resulted in penalties but no clear findings. Then the only friend who had bothered to help me notice that I’d been participating in a culture of communal dishonesty around EA and AI risk, and to figure out how to deconfuse myself about this, was banned for unspecified reasons. Functionally I think I’m justified in construing this as a coverup, not an attempt at enforcing generalizable norms.
Related:
https://benjaminrosshoffman.com/guilt-shame-and-depravity/
https://unstableontology.com/2021/04/12/on-commitments-to-anti-normativity/