We easily get 4-5 LLM-written post submissions a day these days. They are very evidently much worse than the non-LLM-written submissions. We sometimes fail to catch one, and then people complain: https://www.lesswrong.com/posts/PHJ5NGKQwmAPEioZB/the-unearned-privilege-we-rarely-discuss-cognitive?commentId=tnFoenHqjGQw28FdY
Yeah, but how do you know that no one managed to sneak one past both you and the commentators?
Also, there’s an art to this.
If there are models that are that much better than SOTA models, would they be posting to LW? Seems unlikely—but if so, and they generate good enough content, that seems mostly fine, albeit deeply concerning on the secretly-more-capable-models front.