I agree it’ll get harder to validate, but I think having something like this policy is, like, a prerequisite (or at least helpful grounding) for the mindset change.
I understand the motive behind the policy change, but it’s unenforceable and carries no sanctions. In 12–24 months I expect it will be very difficult (perhaps impossible) to detect AI spamming. The floodgates are open, and you can only appeal to people’s willingness to have a real human-to-human conversation. But perhaps those conversations are not as interesting as talking to an AI? Those who seek peer validation for their cleverness will use all available tools to do so, no matter what the policy is.
I mean, the sanctions are ‘if we think your content looks LLM generated, we’ll reject it and/or give a warning and/or eventually delete or ban.’ We do this for several users a day.
That may get harder someday but it’s certainly not unenforceable now.
Yes, but as I wrote in my reply to habryka (see below), I am not talking about the present moment. I am concerned with the (near) future. At the breakneck speed at which AI is moving, it won’t be long before it is hopeless to figure out whether something is AI-generated or not.
So my point, and rhetorical question, is this: AI is not going to go away. Everyone(!) will use it, all day, every day. So instead of trying to come up with arbitrary formulas for how much AI-generated content a post can or cannot contain, how can we use AI to the absolute limit to increase the quality of posts and make LessWrong even better than it already is?!
Or: when the current policy stops making sense, we can figure out a new policy.
In particular, by the time the current policy stops making sense, AI moderation tools may also be more powerful and enable a wider range of policies.
I think you are underestimating the degree to which contributions to LessWrong mostly come from people who have engaged with each other a lot. We review all posts from new users before they go live. We can handle more submissions, and lying to the moderators about whether your content is AI-written is not going to work for that many iterations. And with this policy in place, if we find out you violated the content policies, we feel comfortable banning you.
I know how extremely hard a lot of people work on writing their posts, and that the moderators are doing a fantastic job of keeping the standards very high, all of which is much appreciated. Bravo!
But I assume that this policy change is forward-looking, and the future is what I am talking about. We are at the beginning of something truly spectacular that has already yielded results in certain domains that are nothing less than mind-blowing. Text generation is one of the fields that has seen extreme progress in just a few years’ time. If this progress continues (which is a reasonable assumption), text generation will very soon be as good as or better than the best human writers in pretty much any field.
How do you as moderators expect to keep up with this progress if you want to keep the forum “AI-free”? Is there anything more concrete than a mere policy change that could be done to nudge people into NOT posting AI-generated content? IMHO LessWrong is a competition in clever ideas and smartness, and I think a fair assumption is that if you can get help from AI to reach “Yudkowsky-level” smartness, you will use it no matter what. It’s just like when, say, athletes use PEDs to get an edge. Winning >> Policies