A useful technique to experiment with if you care about token counts is asking the LLM to shorten the prompt in a meaning-preserving way. (Do experiment; results, like all LLM results, vary.) I don’t think I’ve seen this in the comments yet; apologies if it’s a duplicate.
As an example, I’ve taken the prompt Neil shared and shortened it—transcript: https://chatgpt.com/share/683b230e-0e28-800b-8e01-823a72bd004b
It went from ~1.5k words (~2k tokens) down to ~350 tokens, and it seems to produce reasonably similar results to the original, though Neil might be a better judge of that. I’d have tried it on my own prompt, but I’ve long found that the value I derive from system prompts is limited for what I do. (Not impugning Croissanthology here; merely a facet of how my brain works.)
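For anyone who’d rather script this than paste into the chat UI, here’s a minimal sketch using the OpenAI Python SDK. The model name and the compression instruction are my own illustrative choices, not anything from the transcript above; tune both to taste:

```python
# Minimal sketch: ask an LLM to compress a prompt while preserving meaning.
# Assumes the OpenAI Python SDK (>= 1.0) and OPENAI_API_KEY in the environment.
# The model name and instruction wording below are illustrative, not canonical.
from openai import OpenAI

client = OpenAI()

COMPRESS_INSTRUCTION = (
    "Rewrite the following system prompt to be as short as possible "
    "while preserving its meaning and every instruction it contains. "
    "Return only the rewritten prompt, nothing else."
)

def shorten_prompt(prompt: str, model: str = "gpt-4o") -> str:
    """Return a meaning-preserving shortened version of `prompt`."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": COMPRESS_INSTRUCTION},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    # Read the long prompt from a local file and print the compressed version,
    # so you can eyeball it (and count tokens) before swapping it in.
    long_prompt = open("system_prompt.txt").read()
    print(shorten_prompt(long_prompt))
```

Worth re-running a few times and diffing the outputs: in my experience the compression is lossy in different places on different runs, so you want to check that the instructions you care about survived.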