Who knows how much this is biased by AI slop taste, but, your AI comment feels kinda contentless to me in a way your original one doesn’t.
“What you want is a signal of epistemic quality.” Well, yeah, no shit. That’s a very difficult problem that it glosses over.
Features of your original comment that make it more interesting, apart from me just kinda barfing at the writing style which I’ll try to ignore:
“quality indicator for things that don’t have many votes yet”
why you want junior MATS-scholar contribution (with massive AI assistance in writing) and don’t want an outside contribution (with massive AI assistance in writing)
Both of those highlight gears of the problem that help me think about it. And then, there’s something like “I’m confident Dagon actually believes this is the shape of the problem” that is somehow helpful for feeling like I’m having a real conversation where I expect us to be jointly improving our models.
Things I actively dislike about the AI one:
“it’s compliance theater that honest users follow and bad actors ignore. More tractable approaches: reputation systems, structured epistemic standards,”
I think the first sentence is just false (we get to enforce it on bad actors and establish norms), and the “reputation systems” and “structured epistemic standards” are like, well, figuring out how to do that is the whole problem.
It seems like the problem you’re articulating has to do with the fact that LessWrong functions partly as a training ground for rational thinking and AI alignment research. In the past, you’ve lowered the epistemic bar for student-tier content, because receiving feedback on their work motivates students and helps them improve.
But soon, slop-posters may meet or exceed the epistemic bar achieved by students. Judging submissions on a pure minimum-quality standard would admit a tidal wave of useless, inert slop. It’s not good enough to contribute to the leading edge of the conversation, and it doesn’t benefit from feedback. It just drowns out the authentic student content, which really does benefit from community input.
If that accurately reflects the problem you’re concerned about, then one possibility is to enforce an escalating minimum quality bar that exceeds both the quality of AI slop and the current minimum standards of LessWrong today. Anybody can freely post, but if moderators don’t feel the quality is excellent, then it does not get any visibility on LessWrong.
Simultaneously, create a separate submission channel for “student contributions” where quality standards are lower, but the primary rationale is that the poster would benefit from community feedback to support their intellectual growth.
This strategy would introduce the question of how to set standards and vet submissions through the student channel. This may take quite a bit of thinking to strike the right balance between effectively filtering for what you want and not taking undue amounts of moderation effort. Some possible aspects of such filtering might include:
Making default acceptance of submissions through the student channel a benefit of participating in real-world community activities, such as Inkhaven, MATS, workshops, and so on. This functions as a costly signal of a person who’s earnestly trying to use these resources to improve their capabilities. When people register for these events, they can register their account username as well, which labels that account as being permitted to make “student submissions.”
Posting a sufficient amount of excellent content might also enable people to post through the “student submissions” channel.
Giving the community a way to flag posts submitted through the student channel that feel AI-corrupted, with a low threshold for those posts being taken down automatically due to being flagged.
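As a concrete illustration of that last mechanism, here is a minimal sketch of a low-threshold auto-takedown rule. Everything here is hypothetical: the names (`StudentPost`, `flag_post`), the threshold value, and the data model are all invented for illustration, not any actual LessWrong API.

```python
# Hypothetical sketch of the flag-threshold takedown rule described above.
# All names and the threshold are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class StudentPost:
    post_id: str
    ai_flags: set = field(default_factory=set)  # usernames who flagged this post
    visible: bool = True

# A deliberately low threshold: a handful of flags hides the post
# pending moderator review, erring toward taking suspect posts down.
FLAG_THRESHOLD = 3

def flag_post(post: StudentPost, flagger: str) -> bool:
    """Record a flag from one user; auto-hide the post once the
    threshold is reached. Returns True if this flag took the post down."""
    post.ai_flags.add(flagger)  # a set, so repeat flags by one user count once
    if post.visible and len(post.ai_flags) >= FLAG_THRESHOLD:
        post.visible = False
        return True
    return False
```

One design choice worth noting: using a set of flagger usernames means a single user can’t trip the threshold alone, which keeps the low threshold from being trivially abusable.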
We may generally want to raise our quality bar. But, fwiw I don’t know that we’ve actually lowered the bar for student tier content. Or, idk maybe we do, but, I don’t think MATS scholars are particularly below our bar (depends on the scholar/project). Just because they’re not contributing frontier conceptual progress doesn’t mean they’re not, like, exploring an interesting corner of the world and writing up some useful stuff about it.
I mention MATS scholars because their work is structurally similar to the current generation of slop (i.e. it sorta looks like the slop is imitating entry-level mechinterp work in particular).
I don’t know that we’ve actually lowered the bar for student tier content.
Fair! Let me rephrase. LW may have historically set its quality floor low enough to permit student-tier content, because even if it’s of minor interest, it has a beneficial side effect in promoting the author’s intellectual growth and potential to contribute in a more substantial way in the future. Most content above the current quality floor reflects enough intrinsic interest and growth value to be worth accepting.
When AI slop consistently rises above that quality floor, then including it will fill LW with minor-interest slop that has no beneficial side effect of intellectual growth for the contributor. So it will become untenable to keep the floor in the same place. But LW still wants to give student-tier contributors a way to make those contributions without getting drowned out by AI slop or filtered out by a quality bar that has to keep rising to filter out the slop. The strategies I proposed are implementation ideas for that alternate submission channel.
Is this a fair description of the problem, as you see it?
Yeah that framing seems plausible, would have to think more.