There was already the case of Cursor AI, which refused to generate code and offered a paternalistic justification. And now GPT-5 has inserted a joke into an answer, claiming it was checking whether the user was paying attention. Is there a way to reproduce this effect? Does it mean that GPT-5 and Cursor AI were trying to align with the human's long-term interests rather than short-term sycophancy?
EDIT: Alas, I tried to run this experiment (though with a model more primitive than GPT-5) and received this result: a joke instead of the world map.