Do you have a sense of how articles end up getting flagged as “LLM-generated” or “heavily reliant on an LLM”? A friend of mine recently wrote a post that was rejected for that reason, even though they absolutely did not use an LLM. (Okay, fine, that friend is me.) Is it just the quality of the ideas that triggers the red flags, or are there reliable style indicators?
I love reading AI articles and thought pieces, but I rarely use LLMs in my day job, so I’m not quite sure what style I should be avoiding…