Something like this could be good as a software-as-a-service startup, or as a project within a company like reddit/disqus/etc., if someone could get a job there and build it for a company hack day or something. But I’m less optimistic in the context of a small forum like LW.
LW has less data to train on than a big discussion provider like reddit or disqus.
LW has users who are smart enough to exploit weaknesses in the algorithm.
Smart people are already reading LW—it shouldn’t be hard to gather their opinions as they read, and any attempt to approximate their opinions will likely be inferior.
LW is not at a large enough scale to justify this level of automation. Right now the community is small enough that a single person could read and rate every comment without a lot of difficulty.
There’s a chance that you’d put a lot of work into this and not actually solve LW’s core issues. A lean approach to fixing LW would aim for the minimum viable fix. A lean approach to a new forum would aim for the minimum viable forum.
I’m more excited about looking at small online forums through the lens of institution design. Robin Hanson says there are a lot of ideas for institutions that aren’t being tried out. This represents a double coincidence of wants: online discussion is a problem in search of a solution, and prediction markets/eigendemocracy/etc. are solutions in search of problems.
Experimenting with new institutions seems highly valuable: our current ones don’t work very well, and there could be a lot of room to improve. Experimentation lets us battle-test new designs, and it also helps build the résumé that new designs would need before seeing deployment at a large scale. Institutions are highly relevant to EA cause areas like global poverty, existential risk, and preservation of the EA movement’s values as it grows.
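For concreteness, here is a minimal sketch of one of the designs mentioned above, eigendemocracy, read as a PageRank-style fixed point over a user-endorsement graph: a user is trusted to the extent that already-trusted users endorse their judgment. The endorsement matrix, damping value, and function name are illustrative assumptions, not anything specified in this thread.

```python
import numpy as np

def eigendemocracy_scores(endorsements, damping=0.85, iters=100):
    """Compute trust scores by power iteration over an endorsement graph.

    endorsements[i, j] = 1 means user i endorses user j's judgment.
    The result is a PageRank-style fixed point: you are trusted to the
    extent that trusted users endorse you.
    """
    n = endorsements.shape[0]
    # Row-normalize so each user's endorsements sum to 1; users who
    # endorse no one spread their weight evenly.
    out_degree = endorsements.sum(axis=1, keepdims=True)
    transition = np.where(out_degree > 0,
                          endorsements / np.maximum(out_degree, 1),
                          1.0 / n)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * transition.T @ scores
    return scores / scores.sum()

# Toy example: user 2 is endorsed by both other users, so it ends up
# with the highest trust score.
A = np.array([[0, 0, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(eigendemocracy_scores(A))
```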
I’m more excited about looking at small online forums through the lens of institution design.
I think of virtual moderation as a central example of an institution that is relevant to online communities. What kind of institution do you have in mind?
The prediction itself could be done using machine learning, prediction markets, contractors who are paid based on the quality of their predictions, voting with informal norms about what is being voted on, etc.
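As one possible illustration of the “contractors who are paid based on the quality of their predictions” option, here is a minimal sketch using a quadratic (Brier) proper scoring rule. The payout scale and example probabilities are made-up numbers, and nothing here is meant as the actual mechanism being proposed.

```python
def brier_payout(prediction, outcome, base_pay=1.0):
    """Pay a contractor based on a (negated) Brier score.

    prediction: contractor's probability that the trusted moderation
                process would approve the comment.
    outcome:    1 if the trusted process actually approved it, else 0.
    Proper scoring rules make honest probability reports the
    payout-maximizing strategy.
    """
    brier = (prediction - outcome) ** 2   # 0 = perfect, 1 = worst
    return base_pay * (1.0 - brier)

# A confident correct prediction earns more than a hedged or wrong one.
print(brier_payout(0.9, 1))   # 0.99
print(brier_payout(0.5, 1))   # 0.75
print(brier_payout(0.1, 1))   # 0.19
```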
Smart people are already reading LW—it shouldn’t be hard to gather their opinions as they read, and any attempt to approximate their opinions will likely be inferior.
Those smart people’s judgments are the main data being used. We’re not trying to, e.g., use textual features of comments to predict what smart people will think; we’re trying to use what smart people think to predict what some trusted moderation process would output.
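To make that distinction concrete, here is a minimal sketch of the setup as described: the inputs are ordinary readers’ ratings of each comment (not its text), and the target is whatever the trusted moderation process decided on the comments it did review. The data, variable names, and the choice of logistic regression are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical data: each row is one comment, each column is one
# reader's rating of it (-1 downvote, 0 no vote, +1 upvote).
reader_ratings = np.array([
    [ 1,  1,  0,  1],
    [-1,  0, -1, -1],
    [ 1,  0,  1,  1],
    [ 0, -1, -1,  0],
    [ 1,  1,  1,  0],
    [-1, -1,  0, -1],
])
# Label: did the trusted moderation process approve the comment?
trusted_verdict = np.array([1, 0, 1, 0, 1, 0])

# Learn how much weight to give each reader's judgment when predicting
# what the trusted process would say about comments it never reviews.
model = LogisticRegression().fit(reader_ratings, trusted_verdict)
new_comment_ratings = np.array([[1, 0, 1, 0]])
print(model.predict_proba(new_comment_ratings)[0, 1])
```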
I was responding specifically to the machine learning version of your proposal. My contention is that for a community the size of LW, using machine learning to solve this seems a bit like killing a rat with a nuclear bomb. I also suspect that using an institution where all of the important decisions are made by humans is more likely to lead to knowledge that can be reapplied outside of the context of an online forum.