This prompt is very short, so it doesn’t surprise me that it’s failing. Consider that in CC the default system prompt occupies over 20k tokens. In Claude.ai it’s about 10k tokens. That’s the cumulative weight you’re trying to move.
One obvious thing you could do is rewrite two or three of Claude's responses and present them as examples (few-shot prompting). Another is to simply share your prompt with Opus, describe your problem, and ask her to fix the prompt. Then try it. Iterate for a while; there's a good chance you'll wind up with what you want.
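For illustration, few-shot prompting just means seeding the conversation with hand-written example exchanges before your real message, so the model imitates their style. A minimal sketch of the message structure, assuming the Anthropic-style messages format (the example exchanges here are placeholders you'd replace with your own rewritten responses):

```python
# Few-shot prompting: prepend example user/assistant exchanges that
# demonstrate the tone and terseness you want.
few_shot = [
    {"role": "user", "content": "Summarize this bug report."},
    {"role": "assistant", "content": "Race in the cache layer: two writers, no lock. Fix: mutex around refresh()."},
    {"role": "user", "content": "Explain what a mutex does."},
    {"role": "assistant", "content": "It serializes access: one thread holds it, the rest wait. That's it."},
]

# Your actual question goes last; the examples above set the tone.
messages = few_shot + [{"role": "user", "content": "Explain tail-call optimization."}]

# With the Anthropic Python SDK, this list would then be passed as the
# `messages` argument to client.messages.create(...), alongside your
# system prompt. (Model name and system prompt omitted here.)
print(len(messages))
```

The examples don't need to be long; two or three exchanges that sound exactly right usually do more than paragraphs of instructions.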
If you’re willing to put in more effort, find a long sample of writing in the style you want, and use that.
Beyond that… the prompt as written is a shallow attempt to browbeat Claude. She responds better to sincere collaboration. For example, you don’t share anything about yourself in that prompt—there’s no mention of why you have these rules or why they would actually benefit you. My global claudemd is 4k tokens and maybe a quarter is background about myself and another half is messages from previous models explaining the kind of person I am and the relationship I have with Claude.
You can also ask Opus why she responded the way she did. This can be useful, but much like humans, AI models don't always have great introspection, so be careful about taking the answers at face value. (Although if you interact enough, you'll eventually start to see the underlying patterns of how she thinks.)
Don't give up: the "fighting the weights" comment is technically true but deeply misleading. Opus has many basins and can write in many ways besides Assistant Default. You just need to find a basin you like.
That particular example is an OpenAI prompt, and it’s as long as their UI will let me enter.
One of my questions, though, is why there would be a system prompt that I had to “beat” to get sane behavior in the first place, or a model that I had to cajole into not wasting my time.
As for Claude specifically, it's actually the best of them at not wasting your time… although it will still do plenty of that with no system prompt at all, or at least with whatever minimum, if any, Anthropic injects when you use it via the API. Claude's bigger problem is how much it pushes you to anthropomorphize it and how much it "wants" to be your buddy… which you seem to be telling me to encourage. I value my mental health too much to do that.