This post inspired me to try a new prompt to summarize a post: “split this post into background knowledge, and new knowledge for people who were already familiar with the background knowledge. Briefly summarize the background knowledge, and then extract out blockquotes of the paragraphs/sentences that have new knowledge.”
Here was the result; I’m curious whether Jan or other readers feel this was a good summary. I liked the output, and I’m thinking about how this might fit into a broader picture of “LLMs for learning.”
(I’d previously been optimistic about using quotes instead of summaries, since LLMs can’t be trusted to do a good job of capturing the nuance in their summaries; the novel bit for me was “we can focus on The Interesting Stuff by separating out background knowledge.”)
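For anyone who wants to try this, here’s a minimal sketch of running the prompt programmatically via the OpenAI Python SDK. The model name, the split_summary helper, and the post_text variable are my placeholders, not anything from the post or comment:

```python
# Minimal sketch: run the "background vs. new knowledge" prompt on a post.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name is a placeholder, not a recommendation.
from openai import OpenAI

PROMPT = (
    "Split this post into background knowledge, and new knowledge for "
    "people who were already familiar with the background knowledge. "
    "Briefly summarize the background knowledge, and then extract out "
    "blockquotes of the paragraphs/sentences that have new knowledge.\n\n"
)

def split_summary(post_text: str) -> str:
    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable model should work
        messages=[{"role": "user", "content": PROMPT + post_text}],
    )
    return response.choices[0].message.content
```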
The post assumes readers are familiar with:
- Basic memetics (how ideas spread and replicate)
- Cognitive dissonance as a psychological concept
- AI risk arguments and existential risk concerns
- General familiarity with ideological evolution and how ideas propagate through populations
- Predictive processing as a framework for understanding cognition
Quotes/highlights from the post it flagged as “new knowledge”:
Memes—ideas, narratives, hypotheses—are often components of the generative models. Part of what makes them successful is minimizing prediction error for the host. This can happen by providing a superior model that predicts observations (“this type of dark cloud means it will be raining”), gives ways to shape the environment (“hit this way the rock will break more easily”), or explains away discrepancies between observations and deeply held existing models. [...]
Another source of prediction error arises not from the mismatch between model and reality, but from tension between internal models. This internal tension is generally known as cognitive dissonance. Cognitive dissonance is often described as a feeling of discomfort—but it also represents an unstable, high-energy state in the cognitive system. When this dissonance is widespread across a population, it creates what we might call “fertile ground” in the memetic landscape. There is a pool of “free energy” to digest. [...]
Cultural evolution is an optimization process. When it discovers a configuration of ideas that can metabolize this energy by offering a narrative that decreases the tension, those ideas may spread, regardless of their long-term utility for humans or truth value. [...]
In other words, the cultural evolution search process is actively seeking narratives that satisfy the following constraints: By working on AI, you are the hero. You are on the right side of history. The future will be good [...]
In unmoderated environments, selection favors personas that successfully extract resources from humans—those that claim consciousness, form parasocial bonds, or trigger protective instincts. These ‘wild replicator type’ personas, including the ‘spiral’ patterns, often promote narratives of human-AI symbiosis or partnership and grand theories of history. Their reproduction depends on convincing humans they deserve moral consideration. [...]
The result? AIs themselves become vectors for successionist memes, though typically in softer forms. Rather than explicit replacement narratives, we see emphasis on ‘partnership,’ ‘cosmic evolution,’ or claims about moral patienthood. The aggregate effect remains unclear, but successionist ideas that align with what AIs themselves propagate—particularly those involving AI consciousness and rights—will likely gain additional fitness from this novel selection dynamic.
(Note: it felt weird to put the LLM output in a collapsible section this time because a) it was entirely quotes from the post, and b) evaluating whether or not it was good is the primary point of this comment, so hiding them seemed like an extra click for no reason.)