Quick and incomplete roundup of LLM prompting practices I regularly use—feel free to suggest your own or improvements:
-Try asking it to answer “in one sentence”. It won’t always sufficiently compress the topic, but when it does… well, you’ve saved yourself a lot of time.
-Don’t use negatives or say “exclude”… wait… I mean: state things in harmony with your wishes, because unnecessary mentions of exclusions may inadvertently be ‘amplified’ even though you explicitly asked to exclude them.
-Beware hallucinations and Gell-Mann Amnesia: do a basic epistemic sanity check—ask, in a separate conversation, whether it actually knows anything about the topic you’re inquiring about. For example, say I’m a defector from Ruritania and I ask the LLM about its King, whom I know to be a brutal tyrant, but it repeats back only glowing details from the propaganda… then how can I expect it to generate accurate results? “If you ask a good LLM for definitions of terms with strong, well-established meanings you’re going to get great results almost every time.”—you can expect a good response for any sufficiently popular topic on which there is widespread consensus.
-To avoid unbridled sycophancy, say your writing or idea is actually that of a friend, a colleague, or something you found on a blog. Be careful to use neutral language nevertheless—lest it simply follow your lead in assuming it’s good, or bad.
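A minimal sketch of that reframing as a prompt template—the exact wording is my own invention, not a tested formula:

```python
def neutral_review_prompt(draft: str) -> str:
    """Attribute the draft to a colleague and ask for an evenhanded review.

    Avoids both "I wrote this" (invites flattery) and loaded framing
    like "isn't this great?" or "this seems weak" (invites agreement).
    """
    return (
        "A colleague shared the draft below and asked what I thought. "
        "Assess its strengths and weaknesses evenhandedly.\n\n"
        f"{draft}"
    )

prompt = neutral_review_prompt("LLMs are amplifiers, not assistants.")
```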
-When I need a summary of something, I ask Claude for “a concise paraphrase in the style of Hemingway”. Sometimes its aesthetic choices are a bit jarring, but it does ensure that it shifts around the sentence structures and even the choice of words. Also it just reads pithier, which I like.
-Do agonize over key verbs: just today I used two variants of a maybe 100-word prompt—one was “what do I need to learn to start...” and the other “what do I need to learn to start monetizing...”. Everything else about the prompt was the same, but they produced two very different flavors of response: one suggesting training and mentorship, the other suggesting actual outputs. The change was small but completely altered the trajectory of the reply.
-Conceptually, think of the LLM as an amplifier rather than an assistant. In practice this requires giving it some context about your volition and the current state of affairs, so that it has some idea of what to shift towards.
-If you still don’t understand a reply on some highfalutin, double-Dutch, fancy-pants topic—even after prompting it to “ELI5”—start a new conversation and ask it to answer as Homer Simpson. The character probably doesn’t matter; it’s just that he’s a sufficiently mainstream and low-brow character that both ChatGPT and Claude will dumb down whatever the topic is to a level I can understand. It is very cringe, though, the way it chronically stereotypes him.
-Write in the style of the response you want. Since it is an amplifier, it will mimic what it is provided. The heavier you slather on the style, the more it will mimic. To do: see if writing in sheer parody of a given style helps or hinders replies.
-As a reminder to myself: if you don’t get the reply you wanted, usually your prompt was wrong. Yes, sometimes models are censored or there are biases. But it’s not intentionally trying to thwart you—it can’t even intuit your intentions. If the reply isn’t what you wanted, your expectations were off, and that was reflected in the way you wrote your prompt.
-Claude lets you use XML tags, and Anthropic suggests putting instructions at the bottom, not the top.
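As a sketch of that layout—the tag name is my own choice; Claude doesn’t require any particular one—a prompt might wrap the long context in XML tags first and put the instruction last:

```python
# Placeholder standing in for whatever long text you're asking about.
document = "..."

# Context wrapped in XML tags first, instruction at the bottom.
prompt = (
    "<document>\n"
    f"{document}\n"
    "</document>\n\n"
    "Using only the document above, summarize its main argument in one sentence."
)
```

The instruction being the last thing in the prompt means it sits closest to where the model starts generating.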
-Don’t ask it to “avoid this error” when coding—it will just put in a conditional statement that exits the routine. You need to figure out the cause yourself; then maybe you can instruct it to write something to fix whatever you’ve diagnosed as the cause.
-When you are debugging an error or diagnosing a fault in something, it will always try to offer the standard “have you tried turning it off and on again” suggestions. Instead, prompt it to help you identify and diagnose causes without posing a solution, and give it as much context as you can. Don’t expect it to magically figure out the cause—tell it your hunches and your guesses, even if you’re not sure you’re right. The important part: don’t frame it as “how do I fix this?”; ask “what is happening that causes this?” THEN, later, you can ask it how to fix it.
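To make the contrast concrete, here are the two framings side by side; the error scenario is invented purely for illustration:

```python
context = "My web service returns 502 after about 30s; the proxy logs show an upstream timeout."
hunch = "I suspect a slow database query is blocking the worker."

# Fix-first framing: tends to elicit generic 'restart the service' advice.
fix_prompt = f"{context}\nHow do I fix this?"

# Diagnosis-first framing: share context and hunches, explicitly defer the fix.
diagnose_prompt = (
    f"{context}\n"
    f"My hunch: {hunch}\n"
    "Don't propose a fix yet. What could be happening that causes this?"
)
```

Only once the cause is pinned down do you switch back to a fix-oriented prompt.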
-When debugging or diagnosing, also tell it what you previously tried—but be at pains to explain why it didn’t work. Sometimes it ignores this and tells you to do the thing you’ve already tried, because that’s what the knowledge base says to do… but if you don’t, then, like any person, it can’t help you diagnose the cause.
-When asking for an exegesis of a section of Kant’s CPR and you want a term explained, make sure to add “in the context of the section” or “as used by Kant”. For example, “intuition”: if you ask for a definition, it might defer to the common English sense rather than the very specific way the word is used to translate Anschauung. This extends, obviously, to exegesis of any author.