For about as long as LLMs have been capable of doing it productively, I’ve used them to review my writing, fiction or otherwise, and to offer feedback and critique.
The majority of LLMs are significantly sycophantic, to the point that you have to adjust their praise meaningfully downwards unless you’re in it for the sole purpose of flattering your ego. I’ve noticed this to a degree in just about all of them, but it’s particularly bad in most GPT models and in Gemini 2.5 Pro. Claude is less effusive (it’s also less verbose), and Kimi K2 seems to be the least prone to that failure mode.
To compensate, I’ve found a few tricks:
Present your excerpts as something “I found on the internet”, and not your own work. This immediately cuts down on the flattery. (A rough prompt sketch follows after this list.)
Do the same, and also ask it to carefully note all objections and failings in the text.
Another option I’ve seen recommended online, though it works less well in practice, is to tell it that the material is from an author you personally dislike, and that you want “objective” reasons for why it’s bad.
This hasn’t worked well for me at all. It causes a collapse in the opposite direction, with the LLM seizing on the most tenuous grounds for objection and making mountains out of molehills. Most of the objections aren’t even “objective”! Such critique is immediately painful to read, but I feel confident dismissing it, because I remain largely convinced of the quality of my writing: most of what I write gets strongly positive feedback from other humans, and the issues the LLM raises in this mode aren’t ones I’ve ever heard from them.
Of course, if even an LLM wearing the mantle of a hater can only muster weak or bad reasons for your writing being bad, that’s a great sign! It’s grasping at straws, which is a healthy signal of underlying quality.
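For concreteness, here’s a minimal sketch of how the first two tricks might be combined into one prompt, using the OpenAI Python client. The model name and the exact wording are placeholders rather than anything I’d insist on:

```python
# Minimal sketch: frame your own excerpt as found text and ask for objections first.
# Model name and prompt wording are placeholders; adapt to taste.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

EXCERPT = "..."  # paste the text you actually want reviewed

prompt = (
    "I found the following piece of writing on the internet; I have no "
    "connection to the author. Please review it critically: carefully note "
    "every objection, weakness, and failing you can find, however minor, "
    "before saying anything positive.\n\n"
    f"---\n{EXCERPT}\n---"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable model works
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```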
On the topic, one of my hobbies is getting into ~~pointless arguments~~ productive debates with internet strangers, often with no clear resolution or acceptance of defeat from either side. I’ve found it helpful to throw the entire comment chain into Gemini 2.5 Pro or Claude and ask it to declare a winner, identify who argued in good faith, and so on. I take pains to make sure there’s no obvious tell as to which of the users in the dialogue I am, to prevent sycophancy from creeping back in (a rough sketch of that anonymization step is below). Seems to work.
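Here’s a rough sketch of what that anonymization can look like; the thread format, the usernames, and the adjudication prompt are invented for illustration:

```python
# Rough sketch: strip identifying usernames from a comment thread before
# handing it to a model, so it can't tell which participant is you.
# Thread contents and usernames here are made up.
import re
from string import ascii_uppercase

def anonymize(thread: str, usernames: list[str]) -> str:
    """Replace each username with a neutral label like 'User A', 'User B', ..."""
    labels = {name: f"User {ascii_uppercase[i]}" for i, name in enumerate(usernames)}
    for name, label in labels.items():
        thread = re.sub(re.escape(name), label, thread)
    return thread

thread = """alice_writes: Your argument proves too much.
bob_argues: It really doesn't, and here's why...
alice_writes: That's a non sequitur."""

cleaned = anonymize(thread, ["alice_writes", "bob_argues"])
prompt = (
    "Below is a comment thread between strangers. Judge who argued in better "
    "faith and who, if anyone, won the exchange. Be specific.\n\n" + cleaned
)
print(prompt)  # paste into whichever model you prefer, e.g. Gemini 2.5 Pro or Claude
```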