I started with something more “contained” and easier to manage, because actual users will go off script every chance they get, and this is basically like playing chess against yourself while reading a book on how to play chess. But I may have found a working compromise in terms of format and what needs to be captured. It will need a few days to see how it holds up, but right now, this is the basic idea:
An initial PROMPT to get the story started, followed by THOUGHTS that examine it from a gaming perspective, an ACTION, my THOUGHTS, another PROMPT, and... this is where I was having a tough time, because some of the mechanics were not being captured in the preceding THOUGHTS. It was only as I wrote the PROMPT that I figured out certain details or actions that needed to be in play. So when I write a PROMPT that contains these other elements, I add a LOGIC section below it to explain why I “prompted” the way I did.
In crafting the story as you go, the PROMPT is also part of the THOUGHT process! I’m sure anyone giving this a try will be writing and re-writing their prompt as part of the process. Having this extra LOGIC step seems to clean that up, but I don’t think any ML algo will ever keep track of story elements, have ideas on where to take the story next, and then backtrack. Perhaps the “prompt” is some adversarial output from the thoughts, still internal to the process, leading to more thoughts (i.e., the LOGIC), which leads to the actual output.
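To make the format concrete, here is a minimal sketch of one annotated turn as a Python record. All the field and class names are my own illustrative choices; only the PROMPT/THOUGHTS/ACTION/LOGIC labels come from the format described above.

```python
from dataclasses import dataclass

@dataclass
class Turn:
    """One annotated turn of the adventure. Field names are illustrative."""
    prompt: str         # DM text shown to the player
    logic: str = ""     # why the prompt was written this way (the post-hoc step)
    action: str = ""    # what the player did in response
    thoughts: str = ""  # gaming-perspective examination of the exchange

# A minimal annotated exchange (content invented for the example):
turn = Turn(
    prompt="A small white rabbit blocks the cave mouth.",
    logic="Set up an apparently harmless obstacle to subvert later.",
    action="approach the rabbit",
    thoughts="Player will read the rabbit as safe; the later trap relies on that.",
)

adventure: list[Turn] = [turn]
```

The point of keeping LOGIC as its own field, rather than folding it into THOUGHTS, is that it records reasoning that only emerged while the prompt was being written.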
Just my 2 cents.
I’m just “importing” my Twitter thread and adding some additional thoughts.
If some model could spit out 100 of these annotated adventures, then the challenge would have already been solved.
Not sure about that 300,000-word document idea, though… A word-dump-focused “result” plays to the strengths of LLMs while providing none of the structure that is missing.
The more I work on this, the more I think you want something different. Perhaps use existing Choose Your Own Adventure books as a starting point and work on deconstructing them: expanding on all of the reasoning, mechanics, story elements, etc.
The example given is heavy on exposition, with no real mechanics. That seems to rule out any desire for explicit replies to a prompt (the implication that the player goes through the door is enough; there’s no need for “walk through door”).
I get that an algo doesn’t care, but the example is hard to parse. It fails as an adventure (very on-rails), and it’s also like having a director’s commentary track play over a movie you’ve never seen, then getting tested on the dialog and plot points.
The “thoughts” attached to the 4-page sample just look like answers to multiple-choice questions about the body of text. That says nothing about the process of crafting the narrative, which is the point, right? Examples of how to craft story structure? Why something was done?
There is a kind of “other minds” problem, in that the story should be constructed with player expectations in mind. Rather than just generating copious amounts of “story text,” the adventure is more of a dialog in which the DM moves the player through a story, but also “entertains” with traps and dead ends. Predicting what happens next feels like ground already covered by LLMs, but anticipating player actions is where the dynamic feel comes from (so, at the very least, an algo needs to create branching story structure).
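The branching-with-anticipation idea above can be sketched as a small node graph, where each node pre-plans responses to the player actions the DM expects (including dead ends). This is just an illustrative toy; the node names, fields, and fallback behavior are assumptions of mine, not a fixed design.

```python
from dataclasses import dataclass, field

@dataclass
class StoryNode:
    """A story beat plus the DM's anticipated player actions."""
    text: str
    branches: dict[str, str] = field(default_factory=dict)  # action -> next node id
    is_dead_end: bool = False

# Toy graph (content invented for the example):
nodes = {
    "door": StoryNode("A locked door bars the hall.",
                      {"pick lock": "hall", "kick door": "trap"}),
    "trap": StoryNode("The door was trapped; the floor gives way.",
                      is_dead_end=True),
    "hall": StoryNode("The lock clicks open onto a torchlit hall."),
}

def step(node_id: str, action: str) -> str:
    """Follow an anticipated action; an unanticipated one leaves the player in place."""
    return nodes[node_id].branches.get(action, node_id)
```

The interesting part is the unanticipated-action case: in a real game the DM improvises there, and that improvisation is exactly the reasoning the annotations would need to capture.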
A 30M-word dataset won’t do anything to “train creativity” into the system, such as understanding why a small white rabbit isn’t a real threat… until it flies at your neck.
Edit: Would it not just be easier to craft a framework, since all of the questions/considerations required when building a story are going to be the same regardless of player inputs? I’m going to continue on with the “adventure” track I’ve already started, since the end-of-act annotations still explain the reasoning and help point toward future story elements. There is no pre-planned arc, so there is the same level of “real-time” construction as the game progresses. It’s really not clear how annotating a few copies of War and Peace is useful when you also have to write such a story. As stated, after 12k–15k words you would have discovered a framework that works for the next 15M words.