Unless they’re bad at prompting or just aren’t going to go run the prompt. I’m considering writing an article where the actual point is the prompts, and if a human reads the prompts and takes only the prompts away, that’s actually just fine, but where the LLM output might serve as a cheat sheet for anyone who finds the prompts inscrutable. I’m not sure how this will be received; I suspect the allergy to LLM writing is so strong that even doing that will annoy people and get strong downvotes. Maybe I’ll only include the prompts. But I think my user preferences prompt produces better output than other people will get if they rerun the prompts, and I’m not going to share my user preferences publicly.
This does seem valuable.
I’d put the responses in collapsible boxes.
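If the platform passes raw HTML through its markdown renderer (an assumption; GitHub-flavored markdown and several blog/forum platforms do), a collapsible box can be a plain details/summary pair. A minimal sketch:

```html
<!-- Minimal sketch of a collapsible box, assuming the forum's
     renderer allows raw HTML details/summary tags. -->
<details>
<summary>LLM response (click to expand)</summary>

The full model output goes here, hidden by default so readers
who only care about the prompts can skim straight past it.

</details>
```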
What about sharing the parts of your user prompt that you think are most important, redacting whatever you’d rather keep private? I do suspect that a good user prompt is very helpful in getting non-BS answers from LLMs. I have to change mine whenever new model versions change their level of sycophancy.
I guess it will be fine if you directly describe what you’re doing and make it easy to read only the prompts / skim the LLM outputs. Because few people share their LLM prompts when posting LLM writing, people are not fed up with the exercise. If anything it sounds like a “small fun experiment”.