I think that slop could be a social problem (i.e., there are some communities that can’t tell slop from better content), but I’m having a harder time imagining it being a technical problem.

I have a hard time imagining a type of slop that isn’t low in information. All the kinds of slop I’m familiar with are basically “small variations on some ideas, which hold very little informational value.”

It seems like models like o1 / r1 are trained by finding ways to make information-dense AI-generated data. I expect that trend to continue. If AIs for some reason experience some “slop threshold,” I don’t see how they get much further by using generated data.