How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it?
Step 1. Write down the clearest non-dangerous articulation I can manage of the boundaries of the dangerous idea.
If necessary, make this two articulations: one that is easy to understand (in the sense of answering “is what I’m about to say a problem?”) even if it’s way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, followed by a more expensive confirming test (there’s a sketch of this two-tier shape after the step list).
Add to this the most compelling explanation I can come up with, one that doesn’t itself violate those boundaries, of why violating them is dangerous.
Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that’s more than just me, run my boundary articulation(s) past the group and edit as appropriate.
Step 3. Create a mechanism whereby people can request to be added to dangerous-idea (e.g., by mailing dangerous-idea-request).
Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).
Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.
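As a purely hypothetical sketch of how the two-tier test from Step 1 would drive the censorship rule in Step 5, here is the shape in Python. Everything specific in it is made up: the keyword list, the POLICY_URL, and the stand-in expensive_test. In practice the expensive test is probably a human moderator who has internalized the full articulation, not code.

```python
POLICY_URL = "https://example.org/dangerous-idea-policy"  # hypothetical: the Step 4 post

def cheap_test(post: str) -> bool:
    """Overinclusive first pass: cheap to run on everything, many false positives."""
    keywords = ("placeholder-term-a", "placeholder-term-b")  # hypothetical stand-ins
    return any(k in post.lower() for k in keywords)

def expensive_test(post: str) -> bool:
    """Narrower follow-up, only run on posts the cheap test flags.
    Modeled as a function here for shape; realistically a human judgment."""
    return "placeholder-detailed-mechanism" in post.lower()  # hypothetical stand-in

def moderate(post: str) -> tuple[bool, str]:
    """Step 5: censor boundary violations, pointing the author at the published policy."""
    if cheap_test(post) and expensive_test(post):
        return False, f"Removed under the dangerous-idea policy: see {POLICY_URL}"
    return True, "OK"

if __name__ == "__main__":
    allowed, msg = moderate("An innocuous comment.")
    print(allowed, msg)  # True OK
```

The point of the layering is cost: the cheap test can run on every contribution, and the expensive, careful judgment only has to be applied to the small fraction it flags.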
==
That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn’t.
A real-life example whose danger people might accept is the 2008 DNS cache-poisoning flaw discovered by Dan Kaminsky: he found something genuinely scary for the Internet and promptly assembled a DNS Cabal to handle it.
And, of course, the details leaked before the fix was fully deployed. But the delay did, those involved believe, mitigate the damage.
Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren’t the only person in the whole world smart enough to find the flaw.