Rather than relying on the moderator to actually moderate, use the model to predict what the moderator would do.
I’ll tentatively call this arrangement “virtual moderation.”
...
Note that if the community can’t do the work of moderating, i.e. if the moderator were the only source of signal about what content is worth showing, then this can’t work.
Does the “this” in “this can’t work” refer to something other than the virtual moderation proposal, or are you saying that even virtual moderation can’t work w/o the community doing work? If so, I’m confused, because I thought I was supposed to understand virtual moderation as moderation-by-machine.
Oh, did you mean that the community has to interact with a post/comment (by e.g. upvoting it) enough for the ML system to have some data to base its judgments on?
I had been imagining that the system could form an opinion w/o the benefit of any reader responses, just from some analysis of the content (character count, words used, or even NLP), as well as who wrote it and in what context.
In the long run that’s possible, but I don’t think that existing ML is nearly good enough to do that (especially given that people can learn to game such features).
(In general when I talk about ML in the context of moderation or discussion fora or etc., I’ve been imagining that user behavior is the main useful signal.)
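To make the behavior-as-signal idea concrete, here is a minimal, purely illustrative sketch of "virtual moderation": fit a tiny model on past moderator decisions, using community behavior (upvote and report counts) as the features rather than content analysis. Everything here — the feature choice, the toy data, the function names — is an assumption for illustration, not a description of any real system.

```python
# Hypothetical sketch: predict what the moderator would do from
# community-behavior features. Toy data and names are illustrative only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(examples, labels, lr=0.5, epochs=2000):
    """Plain logistic regression fit by stochastic gradient descent."""
    n = len(examples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that the moderator would keep this comment."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy history: features are [upvote rate, report rate];
# label 1 = moderator kept the comment, 0 = moderator removed it.
X = [[0.9, 0.0], [0.8, 0.1], [0.1, 0.9], [0.2, 0.8], [0.7, 0.2], [0.0, 1.0]]
y = [1, 1, 0, 0, 1, 0]

w, b = train(X, y)
print(predict(w, b, [0.85, 0.05]))  # a well-received comment scores high
```

Note what this sketch can't do: a brand-new comment has no votes or reports yet, which is exactly the point above — without community interaction there is no input for the model, so the community still has to do the underlying work.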