As people have observed, there are possible concerns with manipulation. I think these can be addressed, but they might be a serious problem or might require strong theoretical machinery.
It also seems like it requires solving a non-trivial ML problem (to do sufficiently efficient semi-supervised learning in this particular setting); I think this problem looks tractable, but in general most people won’t do something that has that kind of technical risk.
I don’t think that’s much of an answer. Maybe that’s the answer to why people haven’t done all of this, but why haven’t people done some of it? Why does no one even copy what Slashdot did in 1999? Reddit’s main adversary is manipulation, so the possibility that a new system would be manipulable isn’t any worse than the status quo. But it may be that they don’t explain their algorithms because they are afraid that this would make them more manipulable.