Note: I don’t know if everyone is disagreeing with my idea or disagreeing with my opinion on LessWrong.
Maybe click “agree” on this sub-comment if you agree with my idea (independently of whether you agree with my LessWrong opinion), and vice versa for disagree.
I don’t like the idea. Here’s an alternative I’d like to propose:
AI mentoring
After a user gets a post or comment rejected, give them the opportunity to rewrite and resubmit it with the help of an AI mentor. The AI mentor should be able to give reasonably accurate feedback, and won’t accept the revision until it is clearly above a quality bar.
I don’t think this is currently easy to build well, because I think it would be too hard to get current LLMs to be sufficiently accurate in LessWrong-specific quality judgement and advice. If, at some point in the future, this became easy for the devs to add, I think it would be a good feature. Also, if an AI with this level of discernment were available, it could help the mods quite a bit by identifying edge cases and auto-resolving clear-cut ones.
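To make the proposal concrete, here is a rough sketch of what the mentoring loop might look like. It is purely illustrative: `llm_feedback`, `llm_quality_score`, and the threshold are made-up stand-ins for whatever model calls and cutoff the devs would actually choose, not anything that exists today.

```python
# Hypothetical sketch of the AI-mentoring loop described above.
# llm_feedback() and llm_quality_score() are placeholders for real model calls;
# the threshold and prompts are assumptions, not a spec.

QUALITY_THRESHOLD = 0.8  # assumed cutoff for "clearly above the quality bar"


def llm_feedback(draft: str) -> str:
    """Placeholder: ask a model for LessWrong-specific critique of the draft."""
    return "Example feedback: state your claim up front and give one concrete example."


def llm_quality_score(draft: str) -> float:
    """Placeholder: ask a model to rate the draft against site norms (0.0 to 1.0)."""
    return 0.5


def mentor_resubmission(rejected_draft: str, max_rounds: int = 5) -> str | None:
    """Iteratively mentor the author; only return a draft that clears the bar."""
    draft = rejected_draft
    for _ in range(max_rounds):
        if llm_quality_score(draft) >= QUALITY_THRESHOLD:
            return draft  # good enough to resubmit
        feedback = llm_feedback(draft)
        draft = input(f"Mentor feedback: {feedback}\nRevised draft: ")
    return None  # still below the bar; stays out of the queue
```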
I like it; it’s worth a try because it could be very helpful if it works!
A possible objection is that “you can’t mentor others on something you suck at yourself,” and this would require an AI capable of making valuable LessWrong comments itself, which may be comparably hard to automating AI research (considering LLMs’ relative advantages in math and programming).
This objection doesn’t doom your idea, because even if the AI is bad at writing valuable comments, and bad at judging valuable comments written by itself, it may be good at recognizing the failure modes where a human writes a bad comment. It could still work and is worth a try!