Useful exploration for any somewhat-intellectual community which prefers not to cause (or be blamed for) malicious behavior.
I think you’re missing an option, though. You can specifically disavow and oppose the malicious actions/actors, and point out that they are not part of your cause, and are actively hurting it. No censorship, just clarity that this hurts you and the cause. Depending on your knowledge of the perpetrators and the crimes, backing this up by turning them in or actively thwarting them may be in scope as well.
Don’t censor yourself at all—that’s a (possibly unintentional, but that doesn’t matter) blackmail response that does more harm than good.
There is a practical issue with this solution in the era of modern social media. Suppose you have malicious actors who go on to act in your name, but you never would have associated yourself with them under normal circumstances because they don’t represent your values. If you tell them to stand down or condemn them, then you’ve associated yourself with them, and that condemnation can be used against you.
To be clear, “stand down” is not condemning. “F them and their destructive actions” is condemning. In more formal settings, “I do not support X, and I do not want anything to do with people doing X”.
A few examples of clear condemnation being used against someone, where that retaliation is worse than the implied association of doing nothing, would help me understand your comment.
Note that if they’re not ALREADY associated with you in some way (through their actions and publicity, referencing your reputation without your consent), you don’t need to respond in any way. That’s a pretty easy option 4, I think.
That sounds similar to what I called “Option 3”: “You gradually change or improve the community while doing minor self-censorship.”
I think that doing this is highly challenging and far from trivial. For one, online you can often barely tell who your followers are. I think one should try to do the things you mention, but I don’t think it’s enough in most settings, for people with sizeable audiences (thousands of people or more).
I think I was misled by the words “gradually” and “community” in option 3. I see direct opposition to the bad actions as a distinct option. It does improve the community (by removing the bad), and I guess it’s gradual because there’s always more, but it didn’t feel the same to me.
I don’t claim it’s trivial, but it’s not impossible—you know about the problem, because it’s the problem you’re reacting to! “the criminals who do X are not part of our community—everyone please shun them” is a minimum, and in some cases you can follow up with actual specifics.
Note—I don’t have any significant public presence, so I may be severely underestimating the complexity. I still think the discovery problem is inversely correlated with the severity of the problem itself. Note also that I’m not paying much attention to the EA Forum, so if there is a specific problem this is generalizing from (implied but not explained in the EA comments), it may be different from the examples available to me.
Yeah, this also seemed to me like the primary alternative missed in that section.
Thanks for the feedback!