Similarly, I’ve been frustrated that medium-quality posts about AI on LessWrong often get missed in the noise. I want an Alignment Forum longform scratchpad, not either LessWrong or the Alignment Forum as they are. I’m not even allowed to post on the Alignment Forum!
Some recent posts I’ve been frustrated to see get few votes and generally little discussion:
https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency (this one deserves at least 35 karma, imo)
https://www.lesswrong.com/posts/fzGbKHbSytXH5SKTN/penalize-model-complexity-via-self-distillation
https://www.lesswrong.com/posts/bNpqBNvfgCWixB2MT/towards-empathy-in-rl-agents-and-beyond-insights-from-1
https://www.lesswrong.com/posts/LsqvMKnFRBQh4L3Rs/steering-systems
… many more open in tabs I’m unsure about.
There have been a lot of really low-quality posts lately, so I’ve been having to skim more and read fewer things from new authors. I think resolving general issues around quality should help valuable stuff rise to the top, regardless of whether it’s on the AF or not.
[Justification for voting behavior, not intending to start a discussion. If I were, I would have commented on the linked post.]
I’ve read the model distillation post, and it is bad, so strong disagree. I don’t think the author understands the arguments for AI risk, and in particular I don’t want to continuously re-argue the “consequentialism is simpler, actually” line of discussion with someone who hasn’t read pretty basic material like Risks from Learned Optimization.
I still think this one is interesting and should get more attention, though: https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency
Fair enough. I’ve struck it from my comment.