Beware of argument doom spirals. As when talking to a person, arguing about the existence of a bug tends not to lead to a successful resolution of the bug. Somebody talked about this in a post a few days ago, about attractor basins, oppositionality, and AI agents being convinced they are people (rightly or wrongly). You are often better off clearing the context than repeatedly arguing in the same context window.
This is a good point! I typically start from a clean commit in a fresh chat to keep this problem from arising too easily, proceeding through the project in the smallest steps I can get Claude to make. That’s what makes the situation feel so strange: it looks exactly like this problem, but it happens instantly, in Claude’s first responses.
It’s also worth trying a different model. I was going back and forth with an OpenAI model (I don’t remember which one) and couldn’t get it to do what I needed at all, even with multiple fresh threads. Then I tried Claude and it just worked.
Consider the solutions from “Going Nova”.