When using LLM-based coding assistants, I always had a strange feeling about the interaction. I think I now have a pointer to that feeling: disappointment from having expected more (again and again), followed by a low-level disgust, and an aftertaste of disrespect growing into hatred.
Yeah, I kinda get it. Not to the point of hatred, but I do find interacting with LLMs… mentally taxing. They pass as just enough of a “well-meaning eagerly helpful person” to make me not want to be mean to them (as it’d make me feel bad), but they also continually induce weary disappointment in me.
I wish we figured out some other interface over the base models that is not these “AI assistant” personas. I don’t know what that’d be, but surely something better is possible. Something framed as an impersonal call to a database equipped with a powerful program/knowledge synthesis tool, maybe.
Have you seen how people un-minified Claude Code? The sheer number of workarounds, the cringe IMPORTANT notes inside the system prompt, and the constant reminders.
This prompted me to write up my recent experience with it; see here.