I like this. More broadly, I’d like it if the visibility and impact of one’s reaction to a post corresponded to the effort put into expressing that reaction. Even a quick one-line comment conveys far more information than an up- or downvote, yet votes affect the post’s visibility much more than the one-line comment does.
What if, for example, the visibility of posts were controlled by something like sentiment analysis of the comments? That in itself would almost certainly be a terrible solution, but maybe there’s a way to make it work. For example, imagine that the user was prompted for a written response when they up- or downvoted. The user’s karma would determine the maximum base vote strength, and that base strength would then be amplified by the length and sentiment of the comment itself.
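To make the idea concrete, the mechanism might look something like the toy sketch below. Everything here is invented for illustration: the function name, the logarithmic karma cap, the length scaling, and the assumption that a sentiment analyzer returns a score in [-1, 1].

```python
import math

def vote_strength(karma: int, comment: str, sentiment: float) -> float:
    """Toy vote-strength formula (all constants hypothetical).

    karma:     the voter's site karma; caps the base strength.
    comment:   the text the voter was prompted to write.
    sentiment: assumed output of a sentiment model in [-1, 1],
               where magnitude = strength of expressed opinion.
    """
    # Karma sets the ceiling, but with diminishing returns so
    # high-karma users can't dominate linearly.
    base = math.log10(max(karma, 1) + 1)

    # Longer comments amplify the vote, again with diminishing returns.
    word_count = len(comment.split())
    length_factor = 1 + math.log1p(word_count) / 5

    # Strongly worded comments amplify more than neutral ones,
    # but a neutral comment still counts for half weight.
    sentiment_factor = 0.5 + 0.5 * abs(sentiment)

    return base * length_factor * sentiment_factor
```

Under this sketch a longer or more strongly worded comment boosts the vote, while karma only sets the ceiling, so a new account can’t outweigh established users just by writing essays.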
One downside is that this would bias visibility toward the preferences of heavy commenters, and those may not be the people you want driving visibility. Paul Christiano doesn’t comment on this site all that much, but I’d rather have his preferences driving AI alignment post visibility than those of some very loud and frequent LessWrong commenter with a lower level of expertise.