“But it seemed more about the journey than actionable tips”
Actually, I’m curious about this bit, since I thought this article did have actionable tips. For instance, the one about “if you want the AI to write a realistic depiction of someone who knows X, rather than just a popular stereotype of someone who knows X, reference specific things that such a person would know in the prompt instead of simply asking for a character who knows X”. To me, that has felt like an important insight that wasn’t initially obvious when I started using LLMs, and one that I assumed would also help move other people’s AI-generated content away from “oh, this is just dumb stereotypes and bad writing”.
Was that too obvious to count as an actionable tip for you? Or just not relevant for the way you use LLMs for writing?
My own take on this article was that the beginning was about my journey, but everything from the “basics of getting good writing” heading onward was meant to be actionable tips:
“Opus is the best model for writing fiction in my experience”
“if a human would struggle to produce a good story from your prompt, probably so would an LLM, so try to think of prompts that would inspire a human”
“you can immediately get more complicated psychology if you just tell the LLM you want it”
“it’s useful to start brainstorming the story beforehand, both because it gives you more ideas and because it guides the LLM in what to write afterward”
“it’s worth explicitly trying out different frames in the initial brainstorming, such as a narrative lens and a psychological lens”
“what you get out reflects what you put in”
the bit about knowing X (i.e., referencing the specific things such a person would know; see the sketch after this list)
“if you still get stereotyped characters even after doing the above, bring in some other sides of the character so the LLM knows that you don’t want a one-dimensional character”
“if the LLM gives you details that don’t make sense, see if you can spin them into inspiration”
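To make the “knowing X” tip concrete for anyone who drives a model through the API rather than the chat interface, here is a minimal sketch of what the difference looks like in practice. Everything in it is my own assumption rather than something from the article: it uses Anthropic’s Python SDK, an Opus model name, and an invented example domain, and it just contrasts a vague character request with one that references specific knowledge.

```python
# Minimal sketch (assumptions: Anthropic's Python SDK is installed, an Opus
# model is available, and ANTHROPIC_API_KEY is set in the environment;
# the "hacker" domain details below are invented for illustration).
import anthropic

client = anthropic.Anthropic()

# Vague version: tends to yield the popular stereotype of "a hacker".
vague_prompt = "Write a scene featuring a character who is a skilled hacker."

# Specific version: references concrete things such a person would actually
# know, nudging the model toward a more realistic depiction.
specific_prompt = (
    "Write a scene featuring a security researcher who is midway through "
    "reversing a stripped ARM binary in Ghidra, grumbling about the missing "
    "symbol table and debating whether the bug is a use-after-free or just "
    "an off-by-one in a length check."
)

for prompt in (vague_prompt, specific_prompt):
    message = client.messages.create(
        model="claude-3-opus-20240229",  # assumed Opus variant
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    # Print the first part of each reply so the two versions can be compared.
    print(message.content[0].text[:500], "\n---")
```

The point isn’t the particular API call, just that the second prompt bakes in the kind of detail a knowledgeable person would mention unprompted, which is what seems to steer the model away from the stereotype.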