Circles of discussion

On Wednesday I had lunch with Raph Levien, and came away with a picture of how a website that fostered the highest-quality discussion might work.

Principles:

  • It’s possible that the right thing is a quick fix to Less Wrong as it is; this is about exploring what could be done if we started anew.

  • If we decided to start anew, what the software should do is only one part of what would need to be decided; that’s the part I address here.

  • As Anna Salamon set out, the goal is to create a commons of knowledge, such that a great many people have read the same stuff. A system that tailored what you saw to your own preferences would have its own strengths but would work entirely against this goal.

  • I therefore think the right goal is to build a website whose content reflects the preferences of one person, or a small set of people. In what follows I refer to those people as the “root set”.

  • A commons needs a clear line between the content that’s in and the content that’s out. Much of the best discussion is on closed mailing lists; it will be easier to get the participation of time-limited contributors if there’s a clear boundary around the discussion we want them to have read, and it’s short.

  • However, this alone excludes a lot of people who might have good stuff to add; it would be good to find a way to get the best of both worlds between a closed list and an open forum.

  • I want to structure discussion as a set of concentric circles.

  • Discussion in the innermost circle forms part of the commons of knowledge all can be assumed to be familiar with; surrounding it are circles of discussion where the bar is progressively lower. With a slider, readers choose which circle they want to read (a minimal sketch of this structure follows this list).

  • Content from rings further out may be pulled inwards by the votes of trusted people.

  • Content never moves outwards except in the case of spam/abuse.

  • Users can create top-level content in further-out rings and allow the votes of other users to move it closer to the centre. Users are encouraged to post whatever they want in the outermost rings, to treat it as one would an open thread or similar; the best content will be voted inwards.

  • Trust in users flows through endorsements starting from the root set.
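
To make the circle structure concrete, here is a minimal sketch in Python. The names (`Item`, `pull_inward`, `visible_to`) and the number of circles are invented for illustration, not a spec: each piece of content sits in a circle, trusted votes can only pull it inwards, and a reader’s slider selects the outermost circle they want to read.

```python
from dataclasses import dataclass

# Circles are numbered so that higher numbers are closer to the centre
# (matching the star ratings described below): 5 is the innermost commons,
# lower numbers are progressively more open rings.
INNERMOST, OUTERMOST = 5, 2   # new content starts in an outer ring

@dataclass
class Item:
    author: str
    body: str
    circle: int = OUTERMOST

    def pull_inward(self) -> None:
        """Move one ring closer to the centre (triggered by trusted votes)."""
        if self.circle < INNERMOST:
            self.circle += 1
    # There is deliberately no push_outward(): content only leaves the
    # circles by being removed entirely as spam/abuse.

def visible_to(items: list[Item], slider: int) -> list[Item]:
    """A reader's slider picks the outermost circle they want to read."""
    return [item for item in items if item.circle >= slider]
```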

More specifics on what that vision might look like:

  • The site gives all content (posts, top-level comments, and responses) a star rating from 0 to 5, where 0 means “spam/abuse/no-one should see”.

  • The rating that content can receive is capped by the rating of the parent; the site will never rate a response higher than its parent, or a top-level comment higher than the post it replies to.

  • Users have a “slider” à la Slashdot which controls the level of content they see: set to 4, they see only 4- and 5-star content.

  • By default, content from untrusted users gets two stars; this leaves a star for “unusually bad” (e.g. rude) and one for “actual spam or other abuse”.

  • Content ratings above 2 never go down, except to 0; they only go up. Thus, the content in these circles can grow but not shrink, to create a stable commons.

  • Since a parent’s rating acts as a cap on the highest rating a child can get, a rise in the parent’s rating can allow a child’s rating to rise too (see the rating sketch at the end of this post).

  • Users rate content on this 0-5 scale, including their own content; the site aggregates these votes to generate content ratings.

  • Users also rate other users on the same scale, for how much they are trusted to rate content.

  • There is a small set of “root” users whose user ratings are wholly trusted. Trust flows outwards from these users via some attack-resistant trust metric (a toy stand-in appears in the trust sketch at the end of this post).

  • Trust in a particular user can always go down as well as up.

  • Only votes from the most trusted users will suffice to bestow the highest ratings on content.

  • The site may show more trusted users with high sliders lower-rated content specifically to ask them to vote on it, for instance if a comment is receiving high ratings from users who are one level below them in the trust ranking. This content will be displayed in a distinctive way to make this purpose clear (see the moderation sketch at the end of this post).

  • Votes from untrusted users never directly affect content ratings, only what is shown to more trusted users to ask for a rating. Downvoting sprees from untrusted users will thus be annoying but ineffective.

  • The site may also suggest to more trusted users that they uprate or downrate particular users.

  • The exact algorithms by which the site rates content, hands trust to users, or asks users for moderation would probably want plenty of tweaking; machine learning could help here. However, for an MVP, something pretty simple would likely be enough to get the site off the ground.
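
To make the rating rules concrete, here is a minimal rating sketch in Python of how a content rating might be stored and updated. The names (`Content`, `set_rating`, `on_parent_raised`) are invented for illustration, and this is a sketch of the rules above rather than a proposed implementation: it encodes the parent cap, the default of two stars, the drop to zero for spam/abuse, and the rule that ratings otherwise only rise.

```python
from dataclasses import dataclass, field
from typing import Optional

SPAM = 0            # "spam/abuse/no-one should see"
DEFAULT_RATING = 2  # where content from untrusted users starts

@dataclass
class Content:
    rating: int = DEFAULT_RATING
    parent: Optional["Content"] = None
    children: list["Content"] = field(default_factory=list)

    def reply(self) -> "Content":
        """New responses start at the default rating, capped by the parent."""
        child = Content(rating=min(DEFAULT_RATING, self.rating), parent=self)
        self.children.append(child)
        return child

    def cap(self) -> int:
        """A child is never rated higher than its parent."""
        return 5 if self.parent is None else self.parent.rating

    def set_rating(self, proposed: int) -> None:
        if proposed == SPAM:
            self.rating = SPAM          # the only way a rating falls
            return
        proposed = min(proposed, self.cap())
        if proposed > self.rating:      # otherwise ratings only ever rise
            self.rating = proposed
            for child in self.children:
                child.on_parent_raised()  # a higher cap may let children rise

    def on_parent_raised(self) -> None:
        # Hook: re-run vote aggregation for this child now that its cap has
        # risen; left empty in this sketch.
        pass
```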
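
In the same spirit, here is a toy trust sketch covering trust propagation and trust-weighted rating aggregation. The post only commits to “some attack-resistant trust metric”; the breadth-first decay below is a deliberately naive stand-in (it is not attack-resistant), and it simplifies the 0-5 user ratings into plain endorsements. All names are invented.

```python
from collections import deque

def propagate_trust(root_set: set[str],
                    endorsements: dict[str, set[str]]) -> dict[str, int]:
    """Naive stand-in for the real metric: the root set gets trust 5, and
    trust falls by one per endorsement hop, bottoming out at 1."""
    trust = {user: 5 for user in root_set}
    queue = deque(root_set)
    while queue:
        endorser = queue.popleft()
        for endorsed in endorsements.get(endorser, set()):
            level = max(trust[endorser] - 1, 1)
            if trust.get(endorsed, 0) < level:
                trust[endorsed] = level
                queue.append(endorsed)
    return trust

def aggregate_rating(votes: dict[str, int], trust: dict[str, int]) -> int:
    """A vote can never push content above the voter's own trust level, so
    only the most trusted users can bestow the highest ratings; votes from
    untrusted users are ignored entirely."""
    rating = 0
    for voter, stars in votes.items():
        voter_trust = trust.get(voter, 0)
        if voter_trust == 0:
            continue   # untrusted votes never directly affect ratings
        rating = max(rating, min(stars, voter_trust))
    return rating

# Example: trust decays one level per endorsement hop from the root set.
# propagate_trust({"root"}, {"root": {"alice"}, "alice": {"bob"}})
# -> {"root": 5, "alice": 4, "bob": 3}
```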
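
Finally, a moderation sketch of how the site might pick lower-rated content to show a trusted reader as an explicit request to rate. The rule here (look for high ratings from users exactly one trust level below the reader) follows the example above, but the names and details are again purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class RatedItem:
    rating: int
    votes: dict[str, int] = field(default_factory=dict)  # voter -> stars

def moderation_prompts(items: list[RatedItem], trust: dict[str, int],
                       reader: str, slider: int) -> list[RatedItem]:
    """Items below the reader's slider that users one trust level below the
    reader are rating highly; these are displayed in a distinctive
    'please rate this' style rather than as ordinary content."""
    reader_trust = trust[reader]
    prompts = []
    for item in items:
        if item.rating >= slider:
            continue   # the reader already sees this content normally
        votes_from_below = [stars for voter, stars in item.votes.items()
                            if trust.get(voter, 0) == reader_trust - 1]
        if votes_from_below and max(votes_from_below) > item.rating:
            prompts.append(item)
    return prompts
```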