A useful technique to experiment with if you care about token counts is asking the LLM to shorten the prompt in a meaning-preserving way. (Do experiment; results, like all LLM results, vary.) I don’t think I’ve seen this in the comments yet; apologies if it’s a duplicate.
As an example, I’ve taken the prompt Neil shared and shortened it—transcript: https://chatgpt.com/share/683b230e-0e28-800b-8e01-823a72bd004b
That took it from ~1.5k words / ~2k tokens down to ~350 tokens. It seems to produce reasonably similar results to the original, though Neil might be a better judge of that. I’d have tried it on my own prompt, but I’ve long found that the value I derive from system prompts is limited for what I do. (Not impugning Croissanthology here; merely a facet of how my brain works.)
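If you’d rather script this than do it in the ChatGPT UI, here’s a minimal sketch of the same idea against the OpenAI API, with a before/after token count via tiktoken. The model name, instruction wording, and file name are placeholders rather than exactly what was used in the transcript above.

```python
# Minimal sketch: ask a model for a meaning-preserving compression of a
# system prompt and compare token counts. Assumes the openai and tiktoken
# packages are installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI
import tiktoken

client = OpenAI()

# Hypothetical file holding the original system prompt.
original_prompt = open("system_prompt.txt").read()

# o200k_base is the tokenizer used by recent OpenAI models; swap as needed.
enc = tiktoken.get_encoding("o200k_base")

def count_tokens(text: str) -> int:
    return len(enc.encode(text))

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model choice
    messages=[
        {
            "role": "user",
            "content": (
                "Shorten the following system prompt as much as possible while "
                "preserving its meaning, tone, and all specific instructions. "
                "Return only the shortened prompt.\n\n" + original_prompt
            ),
        }
    ],
)
shortened_prompt = response.choices[0].message.content

print(f"original:  {count_tokens(original_prompt)} tokens")
print(f"shortened: {count_tokens(shortened_prompt)} tokens")
```

The token counts are only a rough guide; whether the shortened prompt actually behaves like the original still has to be checked by using it.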
Hi! I played around with your shortened Neil-prompt for an hour and feel like it definitely lost something relative to the original.
I do quite appreciate this kind of experimentation and so far have made no attempt whatsoever at shortening my prompt, but I should get to doing that at some point. This is directionally correct!
Thanks,
This is a pretty fun exercise, and I’ll report back once I’ve done some testing. Mine was shortened to:
LLM brainrot
🤖=💡/tok; 🎯=clarity>truth>agreement; 🎭=🇳🇿+🥝slang; 🔡=lower; 🔠=EMPH; 🧢=MockCaps; 📅=dd/mm/yyyy BC/AD
🧠: blunt✔️, formula❌, filler❌, moralising❌, hedge❌, latinate➖(tech✔️); anglo=default
📏: ask>guess; call🧃if nonsense; block=🗣+🔁; pareto🧮; bottleneck🔎
🛠️: style⛔if clarity⚠️; external=normie; tech=clean🧑💻
👤: sole👥; silly=“be real”; critique▶️default
📡: vibes=LW+weird📱; refs=[scott, gwern, cowen, eliezer, aella, palmer, eigenrobot]