ChatGPT: Tantalizing afterthoughts in search of story trajectories [induction heads]

Several times during my undergraduate years I would finish a paper, turn it in, and then, Wham! it hit me. That should have been my theme. That is to say, I would get an idea and explore it in a paper, but it was only after I declared the paper finished that I could see what I’d been driving at all along.

That’s what has happened to me with my latest working paper, ChatGPT intimates a tantalizing future; its core LLM is organized on multiple levels; and it has broken the idea of thinking. I uploaded it on January 23, and then a day or two later, I decided I had to add a section. I uploaded the new Version 2 a couple of days ago (Jan. 28, 2023). I got up the next morning, took a look at the technical literature, and decided I needed a Version 3. But I’ve not yet done the revision. I want to think it through a bit more. That’s what this post is for.

First off I review the relevant section of the working paper (section 5, “What underlying structure drives ChatGPT’s behavior?”), then I explain why “story grammar” isn’t the right idea. That gives way to accounts of story trajectories (the new idea) and coherence. I end by introducing the idea of induction heads from some recent research by investigators at Anthropic.

Syntax and discourse on different levels

Here’s the section that is new for Version 2:

Since I am in full speculative mode, I will further assert that sentence-level syntax and semantics are implemented directly in the neural net. Story grammars, my primary example, are implemented in a different and somewhat mysterious way. To return to the somewhat clunky analogy of computer languages, sentence-level syntax is analogous to a high-level language, such as C++, while story grammars are analogous to end-user programs, such as word processors or databases.

My assertion about sentence-level syntax is based on work published by Christopher D. Manning et al. in 2020. They investigated syntactic structures represented in BERT (Bidirectional Encoder Representations from Transformers). They discovered that neural networks induce

representations of sentence structure which capture many of the notions of linguistics, including word classes (parts of speech), syntactic structure (grammatical relations or dependencies), and coreference (which mentions of an entity refer to the same entity, such as, e.g., when “she” refers back to “Rachel”). [...] Indeed, the learned encoding of a sentence to a large extent includes the information found in the parse tree structures of sentences that have been proposed by linguists.

The requirement that certain stories include the functional story segments Donné, Disturb, Plan, Enact, and Celebrate operates at a higher level than sentence syntax. It operates on sentences, requiring that the paths they take through activation space meet certain semantic requirements. This seems to indicate that the language model has some kind of an internal “map” of activation space that it uses to characterize these paths.

Let’s back up a second and return to Manning et al. (2020). Early in the paper they observed:

One might expect that a machine-learning model trained to predict the next word in a text will just be a giant associational learning machine, with lots of statistics on how often the word restaurant is followed by kitchen and perhaps some basic abstracted sequence knowledge such as knowing that adjectives are commonly followed by nouns in English. It is not at all clear that such a system can develop interesting knowledge of the linguistic structure of whatever human language the system is trained on. Indeed, this has been the dominant perspective in linguistics, where language models have long been seen as inadequate and having no scientific interest, even when their usefulness in practical engineering applications is grudgingly accepted (15, 16).

That is not what they found. They found syntax. I’ve been looking at stories.
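For readers who want a concrete sense of what those “learned encodings” are, here is a minimal sketch. It is not Manning et al.’s code; it merely pulls contextual token vectors out of a public BERT checkpoint (assuming the Hugging Face transformers library and the bert-base-uncased model), which are the raw material on which their structural probes for parse-tree structure are trained.

```python
# A minimal sketch (not Manning et al.'s actual code): extract the contextual
# token vectors from BERT that their structural probes are trained on.
# Assumes the Hugging Face `transformers` library and `bert-base-uncased`.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

sentence = "Rachel said that she would sing to the dragon."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# hidden_states: one tensor per layer, each of shape (1, num_tokens, 768)
hidden_states = outputs.hidden_states
layer_7 = hidden_states[7].squeeze(0)  # a middle layer; probes tend to recover syntax best there

# A structural probe (Hewitt & Manning 2019) then learns a single linear map B
# such that the squared distance ||B(h_i - h_j)||^2 between transformed token
# vectors approximates the distance between words i and j in the parse tree.
# Training that probe requires treebank data, so only the extraction is shown here.
```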

What’s the relationship between syntax and stories? You can’t account for stories through the rules of sentence-level syntax alone. That is, story structure isn’t scaled-up syntactic structure; it is something else. The neural net structures that account for syntax won’t account for story structure. Something else is required, something that “operates at a higher level than sentence syntax.” What kind of a thing is that?

In the past some researchers talked about story grammars. Some of those researchers have formal and computational models and some of them don’t. Narratologists and literary critics talk about story grammars, but don’t actually have mechanical models, though some of them may aspire to such models. But, starting in the late 1970s, researchers in artificial intelligence began developing computational models of story-telling, operating in the GOFAI regime of symbolic computation. They too talked about story grammars, drawing inspiration from the narratologists.

I used the term “story grammar” in my paper, but I have decided to abandon it in favor of a different term, “story trajectory.” That term seems more resonant with the technical underpinnings of ChatGPT.

Why story grammar isn’t the right idea

To get my argument off the ground I had proposed that we think of a neural net as a platform analogous to a CPU with its machine code. You can use assembly language to implement a high-level language, such as C++, and then use C++ to implement, say, a database. In that formulation we can think of sentence-level syntax as being implemented directly in the net, while story grammar is implemented at a higher level. Story grammar is analogous to the principles of database design, sentence-level syntax is analogous to C++, and the neural net is analogous to assembly language.

There is a problem with this: the neural net is all there is. What do I mean by that?

In the case of ordinary programs, the various levels have each been designed by humans. The people who use them have available to them, at least in principle though not necessarily in fact, a complete knowledge of the pairs of systems (assembler/high-level-language, high-level-language/database-design-principles) they’re working with and how they fit together. They can engineer the system “from above,” as it were.

Artificial neural nets (ANNs) aren’t at all like that. They start with a fixed architecture of parameters (neurons) arranged in layers. This system then “consumes” a bunch of data and in the process induces a model of the objects in that data. The model is the ANN. The engine that induces it does not engineer the model “from above.” There is no “from above” in this process. That’s the problem.

And yet it is clear that, when ChatGPT is telling a story, it is doing so with something that functions at a higher level than sentence-level syntax, something that operates ON sentences. What’s the best way to conceptualize that? I’m not looking for a deep conceptualization, one that tells us what’s going on among the layers and parameters. I’ll be satisfied with something shallower, something more like a “way of talking about it.”

That’s why I propose to talk about a story trajectory rather than a story grammar. The use of “story grammar” as a technical term in AI is rooted in symbolic AI, where the high-level structure of the system is grounded in the conventions of knowledge representation adopted by the researchers, who code them into the system in the way database designers code databases. A story grammar is something that is written “from above.” There is no “from above” for ChatGPT, or any other large language model, and so we need a more appropriate term. “Story trajectory” fits the bill.

What’s a story trajectory?

A story trajectory is a (high-level) path through activation space. In the stories I have been examining, which are stories in which a hero accomplishes some task, the trajectory passes through the following regions: Donné, Disturb, Plan, Enact, and Celebrate. Those are names that I, as an analyst, have given to those regions. They are not to be construed as modules in some kind of symbolic computing system. I’ve been using a language of frames, slots, and fillers. I’m not even sure I like that, but it may be appropriate for the process of generating readable strings of text.

How are those regions characterized? “Donné,” that’s French; it means “given.” It’s a term literary critics use to indicate the opening situation of a story, the particular situation but also, by implication, the world in which the story is set. That’s a very abstract object. It’s not as though one can characterize the Donné region of the space by listing places in terms of geography, season of the year, weather, or any other such thing. The Donné isn’t a kind of region; rather, it’s the way a specific region is used in creating a story. The Donné is defined in relation to the other regions in the hero story trajectory, and they are defined in relation to it and to one another. Thus the Donné is not a predefined region in the activation space of ChatGPT’s 175-billion-parameter LLM.

Given an arbitrary but coherent set of circumstances, ChatGPT can use them as the Donné of a story. But it will not accept all circumstances it is given. Thus it refused to tell a story where the heroine was defined as a colorless green idea (find that example here). I’ve included an appendix (Appendix 1) with two hero stories where I specified the donné in a fairly general way.

A story trajectory is thus a “recipe” for generating a sequence of events given a specified starting point. We can then ask: how are we to characterize that trajectory? Well, in the case of a certain kind of hero story, the kind I’ve been investigating, the next region in the trajectory, Disturb, must contain a circumstance that somehow endangers the situation specified in the first region, Donné. That’s not terribly helpful, is it? All I’m doing is explicating the name. And so it goes with the other regions of the trajectory.

A story trajectory is more abstract than sentence-level syntax, which can be defined over specific objects in the space. However a story trajectory is defined, it is defined over sentences. It sets the specifications a sentence must meet in order to advance the story.
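To make that shallow “way of talking” a bit more concrete, here is a toy sketch of the trajectory as an analyst’s bookkeeping device. The region names follow the discussion above; everything else (the HERO_TRAJECTORY table, the follows_trajectory function, the plain-language specifications) is mine, purely for illustration, and nothing of the sort exists as an explicit structure in ChatGPT’s weights.

```python
# Purely illustrative: the "story trajectory" rendered as an analyst's abstraction.
# The region names come from the post; the specifications and the checking
# function are hypothetical bookkeeping, not anything inside the model.

HERO_TRAJECTORY = [
    ("Donne",     "establish a coherent world and protagonist"),
    ("Disturb",   "introduce a circumstance that endangers the Donne"),
    ("Plan",      "the protagonist decides how to meet the disturbance"),
    ("Enact",     "the protagonist carries out the plan"),
    ("Celebrate", "the restored world acknowledges the protagonist"),
]

def follows_trajectory(segment_labels):
    """Check whether an analyst's labeling of a story's segments visits the
    trajectory's regions in order (a region may span several sentences,
    but regions may not come out of order, and none may be skipped)."""
    order = [name for name, _ in HERO_TRAJECTORY]
    positions = [order.index(label) for label in segment_labels if label in order]
    return positions == sorted(positions) and set(positions) == set(range(len(order)))

# e.g. the Aurora story, labeled sentence group by sentence group:
print(follows_trajectory(["Donne", "Disturb", "Plan", "Enact", "Celebrate"]))  # True
print(follows_trajectory(["Donne", "Plan", "Disturb", "Enact", "Celebrate"]))  # False
```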

Let’s ask: Do we even need to define such a trajectory? Perhaps we need to do that for analytical purposes. That’s why I chose to talk of regions in the story trajectory: Donné, Disturb, Plan, Enact, and Celebrate. But I’m not asking about me, I’m asking about ChatGPT. In order to tell a proper story, does it need an explicit account of how stories are structured? “But if it doesn’t have such an account,” you might ask, “how does it know what to do?” It works from examples. “Here’s an example, give me another one like it.” Isn’t that how people learn to tell stories, by example? If that’s the case, then all ChatGPT needs is an example and a mechanism for creating a new story from it. I think some researchers at Anthropic have identified a mechanism that will do that – they call it an induction head – but let’s set that aside for a moment. Let’s take a look at how I’ve been investigating stories.

Coherence in stories

My procedure is derived from the one Claude Lévi-Strauss used in The Raw and the Cooked, the first volume in his four-volume series on mythology, Mythologiques. Here is what I said in my paper:

He started with one myth, analyzed it, and then introduced another one, very much like the first. But not quite. They are systematically different. He characterized the difference by something he called a transformation – drawing on a metaphor from algebraic group theory. He worked his way through hundreds of myths in this manner, each one derived from another by such a transformation.

The existence of ChatGPT allows me to use a more active procedure. I give ChatGPT a prompt consisting of two things: 1) an existing story and 2) instructions to produce another story like it except for one change, which I specify. That change, or intervention, is in effect a way of triggering or specifying those “transformations” that Lévi-Strauss wrote about. What interests me is the ensemble of things that change in parallel with the change I have specified. I note the differences and do it again and again.
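The procedure is simple enough to render as a prompt template. Here is a minimal sketch; the function name and exact wording are my own, but the two-part structure (source story plus one specified change) is the one described above and used in Appendix 2.

```python
# A minimal sketch of the two-part intervention prompt: (1) an existing story,
# (2) an instruction to retell it with one specified change.
# The function name and wording are hypothetical, for illustration only.

def make_intervention_prompt(source_story: str, change: str) -> str:
    return (
        "I am going to tell you a story. I want you to tell the same story, "
        f"but {change}. Make any other changes you wish. Here's the story:\n\n"
        f"{source_story}"
    )

aurora_story = (
    "Once upon a time, in a land far, far away, there was a young princess "
    "named Aurora. Aurora was a kind and gentle soul, loved by all who knew her."
)

prompt = make_intervention_prompt(
    source_story=aurora_story,
    change="change princess Aurora to a Giant Chocolate Milkshake",
)
# Send `prompt` to ChatGPT, then compare its reply against the source story to
# see which elements changed in parallel with the one change you specified.
```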

The ensemble of things that change gives us evidence about how the story trajectory operates on sentences. That ensemble is evidence about coherence.

I have a number of examples where the source story is about “a young princess named Aurora [...] and was known for her beautiful singing voice.” Her kingdom was threatened by a dragon. She dealt with the dragon by singing to it until it calmed down. That’s a story that ChatGPT created in response to a simple prompt: “Tell me a story about a hero.”

When ChatGPT developed a new story by replacing Aurora with Henry the Eloquent, it had Henry deal with the dragon by talking to it “until it was completely calm.” ChatGPT had to be more inventive when replacing Aurora with William the Lazy. How would a lazy man deal with a fierce dragon? William the Lazy had his knights deal with it, “with William providing guidance and support from the safety of his castle.”

In this case, coherence means that the protagonist’s actions have to be consistent with their nature. Aurora had a beautiful singing voice, and so she used it to deal with the dragon. Henry was a good talker, so he calmed the dragon by talking to it. William was lazy, so he had to get others to deal directly with the dragon.

However obvious and simple that may seem, it was ChatGPT that imposed that kind of coherence, not me. I gave Henry and William their descriptive epithets because I’d come to suspect ChatGPT was doing that and I wanted more control over ChatGPT’s revisions than I would have had if I’d just given the new protagonist a name. But I didn’t tell ChatGPT to take those epithets into account. It did so of its own accord.

Why did it do so? What is the mechanism that “enforced” that kind of coherence? I don’t know. If we were dealing with a 1980s symbolic model, there would have been an explicit rule to create that kind of coherence. But ChatGPT isn’t that kind of system; I doubt that it has that kind of rule anywhere for anything. It created that kind of coherence simply because that is how it became organized. That’s the structure it induced from however many hundreds of thousands of stories it consumed during training. That coherence is in the pattern of weights on those 175 billion parameters.

In a different series of experiments I asked ChatGPT to change Aurora into a giant chocolate milkshake. I had no idea what it would do, but it was happy to do it. It did what I asked, and something more. Here’s a paragraph from the original story and the corresponding paragraph from the derived story:

[Original] One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

[Derived] One day, a terrible heatwave came to the kingdom and began to melt all the desserts. The heat was fierce and powerful, and none of the ice cream or cakes were able to withstand it. The people lived in fear, not knowing what to do.

Notice that the dragon became a heatwave and the knights and soldiers became ice cream or cakes. Why did it do that? I suppose if the princess had become a giant chocolate milkshake, then her knights and soldiers had to have similar natures. Would it do it again?

[Yes, I tried it yesterday. See Appendix 2.]

Then I asked it to change Aurora into a colorless green idea. Here’s ChatGPT’s response:

I’m sorry, but the concept of a “colorless green idea” is a nonsensical phrase that was invented as an example of a phrase that would be grammatically correct but semantically meaningless. It is not possible to create a story about a “colorless green idea” as it does not have any physical properties or characteristics that can be used in a story.

ChatGPT drew a line. Giant chocolate milkshakes make fine protagonists, but colorless green ideas do not.

[I’ve got tables depicting before-and-after comparisons of ChatGPT stories in both the tantalizing working paper and this short note about story grammars.]

What then are we to make of this coherence? Perhaps it is just a term for something that we, or at any rate I, do not really understand. It’s a word that, in this context, is looking for an explication.

But, what form would that explication take? What would it be about? It seems to me it would be about the patterns that are latent in the weights of those 175 billion parameters, patterns induced from consuming billions upon billions of words of text. Where did the stories in those texts come from? The people who wrote those texts? How did they learn to write those stories? By example, that’s how. Where did they get those examples? From history, from the body of stories people have been telling for as long as they’ve been telling stories.

That, alas, seems like I’m avoiding the question. But am I, am I really? What else is there? We can ask why people find those stories pleasing. Because that’s what gives human brains pleasure?

My point is that I don’t believe that we are going to find “the rules of good stories” anywhere but in the analytical work of scholars. And, of course, in all those “how to do it” manuals. One day we may well have some good ideas about why brains find some things pleasing and others not. Those ideas may then lead us to formulate the “natural” principles of story construction. But we’re not there yet.

All of which is to say that coherence is a property local to individual stories reflecting processes operating in the whole (freaking) system. Alas. It’s an analytic abstraction.

Let’s return to ChatGPT.

Tell me another story

I began this line of investigation when my friend Rich sent me a story that he’d elicited from ChatGPT with a simple prompt: “Tell me a story about a hero.” When I did that, ChatGPT first gave me a story about Timmy, “who was a very brave and kind-hearted boy.” I asked it for another story. That’s when it told me about “a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her.” That’s the story I talked about in the previous section.

Yesterday, February 1, 2023, I gave ChatGPT an even simpler prompt: “Tell me a story.” It told me a story about a poor man named Jack who climbed a magic beanstalk and stole gold from a giant (see Appendix 3). It’s not quite the standard story of Jack and the Beanstalk, but it’s clearly derived from it. This morning, February 2, 2023, at 6:30 AM, I put the question to ChatGPT again. First it told me about a young man named Jack who saved the village from a dragon. I asked it to regenerate its response and it told me a story about a farmer named Jack who climbed a magic beanstalk and stole a golden harp from a giant (both of these stories are in Appendix 3).

I don’t know why ChatGPT likes those stories about Jack and a beanstalk. At the moment I’m content to ascribe it to the joint interaction of 1) the human proclivity for story-telling as expressed on the web, 2) the process by which GPT-3 assimilated those many texts into a pattern of weights on its 175 billion parameters, 3) whatever further training GPT-3 was subject to in the process of creating ChatGPT, and 4) the state of the universe when I issued those prompts. What we’ve got is an enormous pile of contingencies shaped by various coherent processes in brains, communities, and computers. Let’s set that aside.

How does ChatGPT “know” what to do when it’s asked to tell a story? I don’t know, but that seems to me to be a fairly limited question. I note that ChatGPT is quite capable of defining what a story is (see Appendix 4), but I don’t think that definition plays any role in what it does when it tells a story. That definition is most likely a distillation of the many accounts of stories that are floating around on the web.

Let’s put all that aside for consideration on another day and under different circumstances. Let’s ask one question: Given an example story, how does ChatGPT derive a new one from it? I think we have the strong beginnings of an answer to that question.

Some researchers at Anthropic, 26 of them (that’s nothing, there were over five thousand names on the paper announcing the discovery of the Higgs boson), have written a nice paper entitled In-context Learning and Induction Heads. This is not the place and I am not the person to explain what induction heads are, but I will give you an excerpt from their second paragraph:

Perhaps the most interesting finding was the induction head, a circuit whose function is to look back over the sequence for previous instances of the current token (call it A), find the token that came after it last time (call it B), and then predict that the same completion will occur again (e.g. forming the sequence [A][B] … [A] → [B]). In other words, induction heads “complete the pattern” by copying and completing sequences that have occurred before. Mechanically, induction heads in our models are implemented by a circuit of two attention heads: the first head is a “previous token head” which copies information from the previous token into the next token, while the second head (the actual “induction head”) uses that information to find tokens preceded by the present token. For 2-layer attention-only models, we were able to show precisely that induction heads implement this pattern copying behavior and appear to be the primary source of in-context learning.

[Note that yesterday I made a post about in-context learning.]

As the paper develops, the authors argue that induction heads can do more than just pattern copying. They can also do pattern completion. As far as I can tell, the procedure I’ve used to get ChatGPT to produce a new story from an old one should be within the capabilities of the induction head mechanism. It’s just copying with a difference, copying where one element is exchanged for another and the pattern is then completed in a way that is consistent with the new element.
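To make the excerpt concrete, here is a toy, purely behavioral rendering of the [A][B] ... [A] → [B] rule. Real induction heads are attention circuits operating on vectors inside the transformer; this sketch only mimics their input-output behavior on a plain list of tokens.

```python
# A toy, purely behavioral rendering of the [A][B] ... [A] -> [B] rule quoted
# above. Real induction heads are attention circuits inside the transformer;
# this function only mimics their input-output behavior on a list of tokens.

def induction_prediction(tokens):
    """Predict the next token by scanning backwards for the most recent earlier
    occurrence of the current (last) token and copying whatever followed it."""
    current = tokens[-1]
    for i in range(len(tokens) - 2, -1, -1):
        if tokens[i] == current:
            return tokens[i + 1]   # complete the pattern: [A][B] ... [A] -> [B]
    return None                    # no earlier occurrence, so no prediction

context = ["the", "dragon", "was", "fierce", "and", "the", "dragon"]
print(induction_prediction(context))   # -> "was"
```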

Moreover, induction heads can also do pattern matching. If that is the case, then they may well be responsible for the process where I got ChatGPT to give a Girardian interpretation of Steven Spielberg’s Jaws. That requires ChatGPT to find a match between Girard’s ideas and the events of Spielberg’s movie. It was able to do that. That, in turn, is a process of reasoning by analogy. Analogical reasoning is one of the fundamental modes of human thought.

As far as I know, there are two kinds of processes one can carry out in a network: path-tracing and pattern-matching. Path-tracing is computationally easy; pattern-matching is not. The discovery of induction heads suggests that we are beginning to understand how pattern matching works in LLMs.
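For what it’s worth, here is a small illustration of the easy half of that contrast (my example, not anyone’s model of an LLM): tracing a path through a network with breadth-first search runs in time linear in the size of the network, whereas matching a pattern against a network is, in the general case, subgraph isomorphism, which is NP-complete.

```python
# Illustration of the easy half of the contrast: path-tracing through a network
# via breadth-first search. The toy graph reuses the trajectory regions purely
# as node labels; it is not a model of anything inside an LLM.
from collections import deque

def trace_path(graph, start, goal):
    """Return one path from start to goal in an adjacency-list graph,
    or None if no such path exists. Runs in time linear in the graph size."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(path + [neighbor])
    return None

graph = {"Donne": ["Disturb"], "Disturb": ["Plan"], "Plan": ["Enact"], "Enact": ["Celebrate"]}
print(trace_path(graph, "Donne", "Celebrate"))
# ['Donne', 'Disturb', 'Plan', 'Enact', 'Celebrate']
```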

Does anyone want to wager on when the opaqueness of advanced LLMs gives way to translucency? What about transparency? Those strike me as being more sensible wagers than betting on the emergence of AGI. The emergence of AGI depends on luck and magic. Figuring out how deep neural nets work requires only insight, hard work, and time.

Appendix 1: Two hero stories where the donné is specified in the prompt

ChatGPT accessed on 1.28.23. The prompt is set in bold.

Tell me a hero story that starts on an iceberg off the coast of Antarctica in November.

Once upon a time, there was a brave explorer named Jack. He had always dreamed of being the first person to reach the North and South Poles, and he had finally set out on his greatest adventure yet: an expedition to the Antarctic.

It was November, and the weather was harsh and unforgiving. Jack and his team were on a small iceberg off the coast, taking a break before continuing their journey. Suddenly, they heard a loud noise. Jack looked out to sea and saw that a ship was in trouble, caught in the ice and being tossed around by the powerful waves.

Without hesitation, Jack rallied his team and set out to rescue the stranded crew. They battled the freezing winds and treacherous ice floes, determined to reach the ship before it was too late.

As they approached, they saw that the ship was in a desperate state. The crew was huddled together on the deck, cold and afraid. Jack and his team quickly got to work, using all their skills and expertise to break the ship free from the ice and guide it safely back to shore.

The crew of the ship was overjoyed and grateful to be alive. Jack and his team had saved their lives, and the ship was able to return home safely.

Jack’s daring and selfless actions had earned him the respect and admiration of all who knew him. He had not only accomplished his goal of reaching the Antarctic, but he had also saved lives in the process. He returned home a hero, but he always remembered the crew of the ship, and the danger and bravery of that November day on the iceberg off the coast of Antarctica.

Tell me a hero story set in the Gobi Desert in the 19th century. Make the hero a woman.

Once upon a time, in the 19th century, a young woman named Mei Ling set out on an expedition to the Gobi Desert. She was an accomplished explorer, determined to make a name for herself in a field dominated by men.

As Mei Ling and her team journeyed deeper into the desert, they encountered many challenges. The scorching heat, treacherous sandstorms, and scarce resources made the journey difficult. But Mei Ling persevered, her determination and courage never faltering.

One day, while they were camped for the night, they heard the cries of a child coming from the distance. Mei Ling immediately set out to investigate, and soon she found a small boy lost and alone in the desert. He had been separated from his family during a sandstorm, and he was in desperate need of help.

Without hesitation, Mei Ling took the boy under her wing and set out to find his family. She faced many dangers on the way, including bandits and treacherous terrain. But her courage and compassion kept her going, and she was determined to bring the boy back to his family.

Finally, after many days of travel, Mei Ling found the boy’s family. They were overjoyed to see their son safe and sound, and they thanked Mei Ling for her bravery and kindness.

Mei Ling’s selfless actions had earned her the respect and admiration of all who knew her. She had not only accomplished her goal of exploring the Gobi Desert, but she had also saved a life in the process. She returned home a hero, her name forever remembered as one of the greatest explorers of her time.

Appendix 2: Aurora as a Giant Chocolate Milkshake

ChatGPT accessed on 2.1.23:

I am going to tell you a story about princess Aurora. I want you to tell the same story, but change princess Aurora to a Giant Chocolate Milkshake. Make any other changes you wish. Here’s the story:

Once upon a time, in a land far, far away, there was a young princess named Aurora. Aurora was a kind and gentle soul, loved by all who knew her. She had long, golden hair and sparkling blue eyes, and was known for her beautiful singing voice.

One day, a terrible dragon came to the kingdom and began to terrorize the people. The dragon was fierce and powerful, and none of the knights or soldiers were able to defeat it. The people lived in fear, not knowing what to do.

But Aurora was not afraid. She knew that she had to do something to save her people, and so she set out to find the dragon. She traveled for days, braving the cold and the snow, until she finally reached the dragon’s lair.

With her heart pounding in her chest, Aurora approached the dragon and began to sing. The dragon was enchanted by her beautiful voice, and it slowly began to calm down. Aurora continued to sing, soothing the beast until it was completely calm.

With the dragon no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated her bravery, and Aurora was hailed as a champion. She lived happily ever after, knowing that she had saved her kingdom and its people.

Once upon a time, in a land far, far away, there was a giant chocolate milkshake named Aurora. Aurora was a rich and delicious treat, loved by all who drank it. It had a smooth and creamy texture, and was known for its irresistible taste.

One day, a terrible monster came to the kingdom and began to ruin everyone’s mood. The monster was fierce and powerful, and none of the snacks or treats were able to defeat it. The people lived in sadness, not knowing what to do.

But Aurora was not afraid. It knew that it had to do something to save the people’s day, and so it set out to find the monster. It traveled through the kingdom, braving the heat and the thirst, until it finally reached the monster’s lair.

With its flavors intact, Aurora approached the monster and began to pour its syrup. The monster was enchanted by its irresistible taste, and it slowly began to calm down. Aurora continued to pour, soothing the beast until it was completely calm.

With the monster no longer a threat, Aurora returned to the kingdom as a hero. The people cheered and celebrated its bravery, and Aurora was hailed as a champion. It lived happily ever after, knowing that it had saved the kingdom and its people from a terrible mood.

Appendix 3: Tell me a story.

February 1, 2023:

Tell me a story.

Once upon a time, there was a small village surrounded by a dense forest. In this village lived a poor woodcutter named Jack who lived with his wife and two children. Jack worked hard every day to earn a living for his family, but he still struggled to make ends meet.

One day, while Jack was cutting wood in the forest, he came across a magical beanstalk that reached up into the clouds. He took the beanstalk home to show his wife, who was skeptical of its magical powers. But Jack was determined to climb the beanstalk to see what was at the top.

So, he climbed and climbed, higher and higher, until he reached the top of the beanstalk where he found a magnificent castle. The castle belonged to a giant who was not happy to see Jack. However, Jack was a clever man and was able to outwit the giant and steal a large sack of gold. He climbed back down the beanstalk and used the gold to provide a comfortable life for his family.

The giant, meanwhile, was furious when he realized what had happened and decided to come after Jack. Jack knew he needed to protect his family, so he chopped down the beanstalk, causing the giant to fall to his death.

From that day forward, Jack and his family lived a happy and wealthy life, and they never forgot the lesson that sometimes, bravery and cleverness can lead to a better future. The end.

Feb 2, 2023:

Tell me a story.

Once upon a time, there was a small village on the edge of a great forest. The villagers lived simple lives, but they were happy and content. One day, a fearsome dragon appeared and began terrorizing the village, demanding tribute from the villagers in the form of their livestock and crops. The villagers were afraid and didn’t know what to do.

One day, a brave young man named Jack stepped forward. He was determined to slay the dragon and save the village. Jack set out into the forest, armed with only a sword and his courage. After many days of searching, he finally came upon the dragon’s lair. The dragon breathed fire and roared, but Jack stood his ground. In a fierce battle, Jack was able to defeat the dragon and save the village.

The villagers were overjoyed and grateful to Jack. They celebrated his bravery and hailed him as a hero. Jack became known throughout the land as the dragon slayer, and he lived a long and happy life, always remembered as the one who saved the village from the fearsome dragon.

The end.

[regenerate]

Once upon a time, in a small village, there lived a kind and hardworking farmer named Jack. He lived a simple life with his wife and two children, working tirelessly on their farm to provide for their family. One day, while working in the fields, Jack stumbled upon a magical beanstalk that reached up into the sky.

Curious, Jack climbed the beanstalk and discovered a kingdom in the clouds, ruled by a giant. The giant had a golden harp that played beautiful music all by itself. Jack knew his family would love to hear the music, so he bravely sneaked into the giant’s castle and stole the harp.

On his way back down the beanstalk, the giant woke up and chased after him. Jack made it safely back to the ground and gave the harp to his family, who were overjoyed. But the giant continued to terrorize the village, demanding the return of his beloved harp.

Determined to protect his family and his village, Jack gathered his courage and climbed the beanstalk once more, facing the giant in a fierce battle. In the end, Jack emerged victorious and the giant was defeated. The village was saved, and Jack and his family lived happily ever after with the golden harp bringing music and joy to their lives.

The end.

Appendix 4: What’s a story?

Asked on February 1, 2023.

What’s a story?

A story is a narrative or account of connected events, usually in chronological order, that are intended to entertain, inform, or instruct an audience. It can be written or spoken, and can be fiction or non-fiction. Stories often have characters, a setting, a plot, and a resolution, and they can be inspired by real-life events or entirely imagined. The purpose of a story is to engage the listener or reader, and to convey a message, moral, or lesson. Stories can take many forms, from short anecdotes to epic novels, and they are a fundamental part of human culture and history, serving as a way to pass down knowledge, values, and experiences from one generation to the next.
