I think that in most cases where a >5k karma user posts something that’s 100% AI, it’s better to let it through (though I expect I would strong downvote it).
Why’s that? Sounds like you agree it’s a strong signal of low-quality / spammy content.
Folks with 5k+ karma often have pretty interesting ideas, and I want to hear more of them. I’m pretty keen on lowering the activation energy required for them to post. Also, they’re unusually likely to develop ways of making non-slop AI writing.
There’s also a matter of “standing”: I think that users who have contributed that much to the site should get to take some risky bets that cost LessWrong something and might pay off. To expand my model here: one of the moderators’ jobs, IMO, is to spare LW the cost of having to read bad stuff and downvote it to invisibility. If LW had to do all the filtering that moderators do, it would be much noisier and more unpleasant to use. But users who’ve contributed a bunch should be able to ask LW to make that judgement directly.
That said, I do expect I’d strong downvote. LLM text often contains propositions no human mind believes, and I’m happy to triage to avoid reading a bunch of sentences no one believes. But I could be wrong, and if there’s a strong enough quality signal, I’d be happy to see it.
For example, consider “Christian homeschoolers in the year 3000”. I haven’t read it; I bounced off of it. Based on Buck’s description of his writing process, I think it’s quite likely it would have been automatically rejected. (Pangram currently only gives it an LLM score of 0.1, though.) I think writers like Buck might like to try more experiments like that in the future, with even more LLM prose. My guess is that LW is better off for having that post on it than not.
I think the idea is that >5k karma users have karma to lose, which punishes them for posting low-quality content, and it’s better to have humans make the judgement about what’s low-quality than AI.