I’d like to ask you the following. How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it? Just imagine a detailed description of how to code an AGI or create bioweapons. Would you refrain from censoring such information in favor of free speech?
The subject matter here has a somewhat different nature, one that fits a “more people, more probable” pattern. The question is whether it is better to discuss it openly so as to possibly resolve it, or to censor it and thereby impede it. The problem is that this very question cannot be discussed without deciding not to censor it. That doesn’t mean nobody can work on it, just that only a few people can, in private. It is very likely that the people who already know about it are the ones most likely to solve the issue anyway. The general public would probably only add noise and, simply by knowing about it, make it much more likely to happen.
How would you, as an editor (moderator), handle dangerous information that is more harmful the more people know about it?
Step 1. Write down the clearest non-dangerous articulation I can of the boundaries of the dangerous idea.
If necessary, make this two articulations: one that is easy to understand (in the sense of answering “is what I’m about to say a problem?”) even if it’s way overinclusive, and one that is not too overinclusive even if it requires effort to understand. Think of this as a cheap test with lots of false positives, and a more expensive follow-up test.
Add to this the most compelling explanation I can come up with of why violating those boundaries is dangerous that doesn’t itself violate those boundaries.
Step 2. Create a secondary forum, not public-access (e.g., a dangerous-idea mailing list), for the discussion of the dangerous idea. Add all the people I think belong there. If that’s more than just me, run my boundary articulation(s) past the group and edit as appropriate.
Step 3. Create a mechanism whereby people can request to be added to dangerous-idea (e.g., by mailing dangerous-idea-request).
Step 4. Publish the boundary articulations, a request that people avoid any posts or comments that violate those boundaries, an overview of what steps are being taken (if any) by those in the know, and a pointer to dangerous-idea-request for anyone who feels they really ought to be included in discussion of it (with no promise of actually adding them).
Step 5. In forums where I have editorial control, censor contributions that violate those boundaries, with a pointer to the published bit in step 4.
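The two-tier articulation from Step 1 and the censorship rule from Step 5 amount to a two-stage filter: a cheap, deliberately overinclusive screen, followed by a more expensive precise check, with violating posts replaced by a pointer to the Step 4 policy post. A minimal sketch, assuming invented names throughout (`CHEAP_PATTERNS`, `POLICY_URL`, and the checks themselves are placeholders, not anything the thread specifies):

```python
# Hypothetical sketch of the Step 1 / Step 5 pipeline. All names and
# patterns here are invented for illustration.

POLICY_URL = "https://example.org/dangerous-idea-policy"  # the Step 4 post

# Stage 1: cheap, deliberately overinclusive screen (many false positives).
CHEAP_PATTERNS = ["forbidden-topic"]

def cheap_flags(text: str) -> bool:
    """Fast overinclusive test: 'is what I'm about to say a problem?'"""
    lowered = text.lower()
    return any(p in lowered for p in CHEAP_PATTERNS)

def expensive_check(text: str) -> bool:
    """Precise but costly follow-up test. Stubbed here; the real version
    would apply the careful boundary articulation (or moderator judgment)."""
    return "how to" in text.lower()

def moderate(post: str) -> str:
    """Return the post unchanged, or the Step 5 removal notice."""
    if not cheap_flags(post):
        return post  # fast path: clearly fine
    if expensive_check(post):
        return f"[removed; see boundary policy: {POLICY_URL}]"
    return post  # stage-1 false positive: let it through
```

The point of the split is the same as in Step 1: most posts never pay the cost of the precise check, and the overinclusive screen errs on the side of flagging.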
==
That said, if it genuinely is the sort of thing where a suppression strategy can work, I would also breathe a huge sigh of relief for having dodged a bullet, because in most cases it just doesn’t.
A real-life example that people might accept the danger of would be the 2008 DNS flaw discovered by Dan Kaminsky—he discovered something really scary for the Internet and promptly assembled a DNS Cabal to handle it.
And, of course, it leaked before a fix was in place. But the delay did, they think, mitigate damage.
Note that the solution had to be in place very quickly indeed, because Kaminsky assumed that if he could find it, others could. Always assume you aren’t the only person in the whole world smart enough to find the flaw.
Interesting. Do you have links? I rather publicly vowed to undo any assumed existential risk savings EY thought were to be had via censorship.
That one stayed up, and although I haven’t been the most vigilant in checking for deletions, I had (perhaps naively) assumed they stopped after that :-/
Ahh. Are you aware of any other deletions?
Here...
Yes, several times other posters have brought up the subject and had their comments deleted.
I hadn’t seen a lot of stubs of deleted comments around before the recent episode, but you say people’s comments had gotten deleted several times.
So, have you seen comments being deleted in a special way that doesn’t leave a stub?
Comments only leave a stub if they have replies that aren’t deleted.
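That stub rule can be sketched as a small rendering function. This is a minimal model of the behavior described above, not the forum’s actual code; the data model and names are assumptions:

```python
# Sketch of the stub rule: a deleted comment renders as a "[deleted]" stub
# only when it still has undeleted (visible) descendants; otherwise it
# leaves no trace in the rendered thread.

def render(comment: dict) -> list:
    """Return the visible lines for a comment subtree, or [] if nothing shows."""
    child_lines = [line
                   for reply in comment.get("replies", [])
                   for line in render(reply)]
    if comment.get("deleted"):
        if not child_lines:
            return []                      # no visible replies: no stub
        return ["[deleted]"] + child_lines  # stub kept to anchor the replies
    return [comment["text"]] + child_lines
```

Under this model, a deletion with no surviving replies is invisible, which is consistent with not noticing stubs even if deletions were happening.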