[META] Building a rationalist communication system to avoid censorship
The recent disappearance of Slate Star Codex made me realise that censorship is a real threat to the rationalist community. Not hard, government-mandated censorship, but censorship in the form of online mobs prepared to harass and threaten those seen to say the wrong thing.
The current choice for a rationalist with a controversial idea seems to be between publishing it online, where the angriest mobs from around the world can easily reach it, and not publishing at all.
My solution: digital infrastructure for a properly anonymous, hidden rationalist community.
Related to Kolmogorov Complicity and the Parable of Lightning (now also deleted, but here are a few people discussing it).
So we need to create the social norms and digital technologies that allow good rationalist content to be created without fear of mobs. My current suggestions include:
1) Properly anonymous publishing. Every post put into this system is anonymous. If a rationalist publishes many posts, subtle clues about their identity could add up, so make each post independently anonymous: given a specific post, you can’t find others by the same author. Record nothing more than the actual post. With many rationalists putting posts into this system, and none of the posts attributable to a specific person, mobs won’t be able to find a target. And no one knows who is putting posts into the pool at all.
2) Delay each published post by a random time of up to a week; we don’t want the publication timestamp to give away information about the author’s timezone.
3) Only certain people can access the content. Maybe restrict viewing to people with enough Less Wrong karma. Maybe rate-limit viewing to, say, 4 posts a day, to make it harder to scrape the whole anonymous archive. (Unrestricted anonymous posting combined with restricted viewing is an unusual dynamic.)
4) Of course, only some posts will warrant this level of paranoia, so these features could perhaps be turned on and off independently.
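The random publication delay from suggestion 2 can be sketched in a few lines. This is a minimal illustration, assuming a uniform distribution over the week; the function name and the choice of distribution are mine, not from the post:

```python
import random
from datetime import datetime, timedelta

MAX_DELAY_SECONDS = 7 * 24 * 3600  # up to one week, as suggested in the post

def schedule_publication(submitted_at: datetime) -> datetime:
    """Pick a publication time a uniformly random interval after submission,
    so the public timestamp reveals nothing about when (or in what timezone)
    the author actually submitted."""
    delay = random.uniform(0, MAX_DELAY_SECONDS)
    return submitted_at + timedelta(seconds=delay)
```

A uniform delay means the publish time alone narrows the submission window only to the preceding week, which covers every timezone many times over.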
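The anti-scraping rate limit from suggestion 3 could look something like the sketch below. The 4-posts-per-day figure is from the post; the sliding-window accounting and the idea of keying on a viewer identifier (e.g. a karma-gated account) are assumptions of mine:

```python
import time
from collections import defaultdict

class DailyViewLimiter:
    """Cap each viewer at a fixed number of anonymous posts per day,
    making it slow to scrape the whole archive."""

    def __init__(self, posts_per_day: int = 4):
        self.limit = posts_per_day
        self.views = defaultdict(list)  # viewer id -> list of view timestamps

    def allow(self, viewer: str, now: float = None) -> bool:
        now = time.time() if now is None else now
        day_ago = now - 86400
        # Keep only views from the last 24 hours (sliding window).
        self.views[viewer] = [t for t in self.views[viewer] if t > day_ago]
        if len(self.views[viewer]) >= self.limit:
            return False
        self.views[viewer].append(now)
        return True
```

At 4 posts a day, scraping an archive of a few thousand posts from one account would take years, while a casual reader is barely inconvenienced.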
My current model of online mobs is that they are not particularly good at updating on subtle strands of evidence or digging around online. One person who wants to stir up a mob does the digging, then posts the result somewhere obvious. This raises the possibility of disinformation: if we can’t stop one person from putting our real name and address in a social media post for mobs to pass around, could we put out other, false names and addresses? Unrestricted anonymous posting also invites spam and abuse, so the system needs defences that don’t deanonymise anyone:
1) Precondition GPT-X on a sample of rationalist writings, and precondition another copy on samples of spam. Anything that causes more surprise on the rationalist net than on the spam net is probably spam. (In general: AI-based spam filtering.)
2) Reusable codewords. When you submit a post, you can also supply a random codeword. Posts are given preferential treatment by the spam filter if they carry a codeword that also accompanied known-good posts. Codewords are hashed and salted before being stored on the server, along with a number representing reputation, and are never shown. Posts are stored with their reputation plus a small random offset.
3) Rotating password. Every hour, generate a new password. Whenever anyone with enough karma requests a page, include the hour’s password in small text at the bottom of the page (or even in an HTML comment). When someone later uses this password, you know it was someone who visited some Less Wrong page in the last hour, but not who. Viewing could be restricted with the same password.
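The two-model "surprise" filter from suggestion 1 can be illustrated without a large language model. In this sketch, smoothed unigram models stand in for the two copies of GPT-X (an enormous simplification, purely to show the comparison logic); the function names and the add-one smoothing are my choices:

```python
import math
from collections import Counter

def train_unigram(corpus: str):
    """Fit a word-frequency model with add-one smoothing, so unseen
    words get a small nonzero probability instead of infinite surprise."""
    counts = Counter(corpus.split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    return lambda w: (counts[w] + 1) / (total + vocab)

def surprise(model, text: str) -> float:
    """Average negative log-probability per word: how 'surprised' the
    model is by the text. This is the quantity a real LM's perplexity
    would measure."""
    words = text.split()
    return -sum(math.log(model(w)) for w in words) / max(len(words), 1)

def looks_like_spam(post: str, rationalist_model, spam_model) -> bool:
    # The post's rule: more surprising to the rationalist net than to
    # the spam net => probably spam.
    return surprise(rationalist_model, post) > surprise(spam_model, post)
```

The same comparison carries over directly if the unigram models are replaced by per-token log-probabilities from two fine-tuned language models.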
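The codeword scheme from suggestion 2 could be stored roughly as follows. One detail is forced by the design: to look a codeword up again, the hash must be deterministic, so this sketch uses a single server-side salt rather than a per-codeword salt. That choice, the PBKDF2 parameters, and the ±0.5 offset range are all assumptions of mine:

```python
import hashlib
import os
import random

def hash_codeword(codeword: str, salt: bytes) -> bytes:
    # Store only a salted, slow hash; the server never keeps the codeword
    # in the clear, so a database leak doesn't link an author's posts.
    return hashlib.pbkdf2_hmac("sha256", codeword.encode(), salt, 100_000)

class ReputationStore:
    def __init__(self):
        self.salt = os.urandom(16)  # one secret salt per server (assumption)
        self.reputation = {}        # hashed codeword -> reputation score

    def record_post(self, codeword: str, known_good: bool):
        key = hash_codeword(codeword, self.salt)
        self.reputation[key] = self.reputation.get(key, 0) + (1 if known_good else 0)

    def stored_reputation(self, codeword: str) -> float:
        # Per the post: attach a small random offset before storing the
        # value with a post, so an exact reputation number can't be used
        # to link posts that share a codeword.
        key = hash_codeword(codeword, self.salt)
        return self.reputation.get(key, 0) + random.uniform(-0.5, 0.5)
```

The random offset matters because reputation is effectively a fingerprint: two posts stored with reputation exactly 17 would otherwise be linkable.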
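The rotating hourly password from suggestion 3 doesn't require storing past passwords at all: it can be derived TOTP-style from a server secret and the current hour, much like RFC 6238 one-time passwords. A sketch, where the secret value, the 12-character truncation, and the one-hour grace window are assumptions:

```python
import hashlib
import hmac
import time

SECRET = b"server-side secret"  # placeholder; a real deployment needs a random key

def hourly_password(now: float = None) -> str:
    """Derive this hour's password from the server secret. Everyone who
    loads a page in the same hour sees the same token, so presenting it
    later proves recent access without identifying the visitor."""
    hour = int((time.time() if now is None else now) // 3600)
    return hmac.new(SECRET, str(hour).encode(), hashlib.sha256).hexdigest()[:12]

def check_password(candidate: str, now: float = None) -> bool:
    now = time.time() if now is None else now
    # Accept the current and previous hour's passwords, so a visitor who
    # loaded a page just before the boundary isn't locked out.
    return any(
        hmac.compare_digest(candidate, hourly_password(now=now - 3600 * k))
        for k in (0, 1)
    )
```

Because the token is the same for every visitor in a given hour, it divides readers into hour-sized anonymity sets: the server learns "someone who browsed recently" and nothing finer.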
I look forward to a discussion of which cryptographic protocols are most suitable for building this.