We can do this collectively, e.g. my clickbait is probably clickbaity for you too.
This assumes good faith. As soon as enough people learn about the Guardian AI, I expect Twitter threads coordinating people: “let’s flag all outgroup content as ‘clickbait’”.
Just like people are abusing current systems by falsely labeling content they want removed as "spam" or "porn" or "original research", or whichever label effectively means "this will be hidden from the audience".
Oh yeah, definitely. I think such a system shouldn’t try to enforce one “truth”—which content is objectively good or bad.
I'd much rather see people forming groups, each with its own moderation rules, and let people be part of multiple groups. There are many methods that could be tried out; e.g. some groups could use algorithms like EigenTrust to decide how much to trust users.
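To make the EigenTrust idea concrete, here's a minimal sketch of its core computation: each user's local trust scores are normalized, and global trust emerges as the principal eigenvector of the trust matrix, found by power iteration. The three users and their local trust values are made up for illustration, and this omits parts of the full algorithm (pre-trusted peers, damping against collusion).

```python
import numpy as np

# local_trust[i][j]: how much user i trusts user j,
# based on their direct interactions (illustrative values)
local_trust = np.array([
    [0.0, 4.0, 1.0],
    [2.0, 0.0, 3.0],
    [1.0, 1.0, 0.0],
])

# Normalize each row so every user's outgoing trust sums to 1
C = local_trust / local_trust.sum(axis=1, keepdims=True)

# Power iteration: t = C^T t until convergence. Intuitively, your
# global trust is the trust others place in you, weighted by
# how much *they* are globally trusted.
t = np.full(3, 1.0 / 3.0)
for _ in range(100):
    t_next = C.T @ t
    if np.allclose(t, t_next, atol=1e-10):
        break
    t = t_next

print(t)  # global trust scores for the three users, summing to 1
```

A nice property for the group-moderation use case: a user who is trusted only by untrusted accounts (e.g. a brigading sockpuppet ring) ends up with low global trust, because the weights propagate through the whole graph.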
But before we can get to that, I see a more prohibitive problem: it will be hard to attract enough people to get such a system off the ground.