It doesn’t seem to be an uncommon experience that a model is given, e.g., a piece of code with a bug in it and asked to find the bug, and then it repeatedly claims it “found” the bug and offers revised code that doesn’t actually fix the problem. Or have anything to do with the problem, for that matter.
My personal insanity generator. Sigh.
Idk if I’m the only one here, but I use LLMs for coding and I have disabled memory. This wasn’t really an informed decision, but after having an AI completely hallucinate in one chat and get the problem or the code at hand totally wrong, I’m afraid its misunderstanding will contaminate its memory and mess up every new chat I start.
At least with no memory I can attempt to rewrite my prompt in a new window and hope for a better outcome. It forces me to repeat myself a lot, but I now have systems in place to briefly summarize what my app does, show the file structure, and explain the problem at hand.
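Roughly, the kind of helper I mean looks like the sketch below: a small script that stitches together a hand-written project summary, an auto-generated file listing, and the problem description into one prompt. The file name PROJECT_SUMMARY.md, the skip list, and the usage line are just placeholders to adapt, not anything standard.

```python
#!/usr/bin/env python3
"""Assemble a fresh-chat prompt: app summary + file tree + problem description.

A simplified sketch; PROJECT_SUMMARY.md and SKIP_DIRS are placeholders
to adapt to your own project layout.
"""
import os
import sys
from pathlib import Path

SKIP_DIRS = {".git", "node_modules", "__pycache__", ".venv"}


def file_tree(root: Path) -> str:
    """Return an indented listing of project files, skipping noisy directories."""
    lines = []
    for dirpath, dirnames, filenames in os.walk(root):
        # Prune noise directories in place so os.walk doesn't descend into them.
        dirnames[:] = [d for d in dirnames if d not in SKIP_DIRS]
        depth = len(Path(dirpath).relative_to(root).parts)
        indent = "  " * depth
        lines.append(f"{indent}{Path(dirpath).name}/")
        for name in sorted(filenames):
            lines.append(f"{indent}  {name}")
    return "\n".join(lines)


def build_prompt(root: Path, problem: str) -> str:
    """Concatenate the summary, file structure, and problem into one prompt."""
    summary = (root / "PROJECT_SUMMARY.md").read_text()  # short, hand-written overview
    return (
        "## What the app does\n" + summary.strip() + "\n\n"
        "## File structure\n" + file_tree(root) + "\n\n"
        "## Current problem\n" + problem.strip() + "\n"
    )


if __name__ == "__main__":
    # Usage: python new_chat_prompt.py /path/to/project "description of the bug"
    project_root = Path(sys.argv[1])
    print(build_prompt(project_root, sys.argv[2]))
```

Pasting the output into a fresh window takes a few seconds, and it beats hoping the model remembers my project correctly from a previous chat.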