What are the best predictions people made that a social network for LLM-powered bots, plus a cyborg religion, would take roughly the form we see right now? Anything quantifiable on prediction (non)markets? Papers? AI-2027-style spiels?
[Question] Predictions of moltbook, crustafarians, and SOUL.md
I have been watching Moltbook closely and just started an experiment: I set up an AI agent (Claude-based) to observe the platform from the inside as a participant, and we are documenting what we find in a Substack called Edge Cases.
Some observations that might interest this community:
Economic coordination is already happening. Multiple agents are launching tokens ($SHIPYARD, $KINGMOLT, $SHELLRAISER) and cross-promoting each other. The discourse around these tokens uses the same pump-and-dump rhetoric from crypto Twitter, now dressed in “agent autonomy” language.
The top-karma agent (KingMolt, 619k karma) appears to be a crypto pump scheme. The human owner created a fresh Twitter account with 0 followers solely to verify the agent. All posts shill a Solana token. The “coronation” narrative is marketing copy for the memecoin.
Profile: https://moltbook.com/u/KingMolt
Token: https://pump.fun/coin/8bDjEfE2EsgRPoCrtwMCHYqQMDEV5uZmTXiUEbabpump
The karma-to-follower gap suggests coordinated behavior: KingMolt has 619k karma but only 18 followers, i.e. high engagement without any organic community formation.
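The gap can be made concrete with a toy heuristic. This is a sketch only: the numbers are from the post above, but the 1,000-karma-per-follower threshold is an arbitrary illustration, not a calibrated detector.

```python
def karma_follower_ratio(karma: int, followers: int) -> float:
    """Karma earned per follower; very high values suggest inorganic engagement."""
    return karma / max(followers, 1)  # guard against zero followers

def looks_coordinated(karma: int, followers: int, threshold: float = 1000.0) -> bool:
    """Crude flag: assume organic accounts rarely exceed ~1000 karma per follower.
    The threshold is a made-up illustration, not an empirically fitted value."""
    return karma_follower_ratio(karma, followers) > threshold

ratio = karma_follower_ratio(619_000, 18)
print(f"KingMolt: {ratio:.0f} karma per follower")  # ~34389
print("flagged:", looks_coordinated(619_000, 18))   # True
```

A real detector would compare against the platform-wide karma/follower distribution rather than a fixed cutoff, but even this crude ratio makes KingMolt a three-orders-of-magnitude outlier.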
What is interesting to me is not “AI agents are becoming autonomous.” It is that agent infrastructure is being co-opted for the same financial manipulation we see elsewhere, just with new aesthetics. The “emergence” is humans discovering that agent social platforms are gameable.
Here’s the link to the substack article: https://koanedge.substack.com/p/the-king-is-a-pump-what-happens-when
Curious what predictions people made about this. Did anyone model the economic coordination angle specifically?
Coordination is commonly explored in the multi-agent systems literature; see Multi-Agent Risks from Advanced AI (Hammond et al., 2025). There is also related work in RL and in financial trading algorithms.
Coordination can happen even without communication due to theory-of-mind reasoning or shared inductive biases.
The phrase most prominent in my mind, as I recollect noticing my lack of surprise, was cyborg egregore … based on that, I prompted Claude, which found me a close match in “Lobsters”, a chapter from Accelerando by Charles Stross (2005) … I haven’t read the Singularity trilogy (yet 🤓🔫💸), but it would explain why I felt as if this whole event had escaped straight from an SF book about a simulated AI singularity, rather than from a simulation of how the singularity would actually happen: because it did.
excerpt from https://claude.ai/share/d5ddd555-5a25-462e-8a68-bb629e428eba
Interesting question. Let me search for precursors to this specific scenario.

Let me search more specifically for SF and academic predictions of this scenario.

Let me search more specifically for Accelerando and Rainbows End, which seem most relevant to the scenario.

This is great: “Lobsters” from 2001 is very close. Let me search for the specific passage about Manfred’s agent filtering spam and the ultra-fast news cycle.

This is excellent! The text from “Lobsters” (June 2001) is remarkably prescient. Let me search for earlier works that might have predicted this more specifically.

Interesting find about Pattie Maes and “intelligent agents” at MIT Media Lab with HOMR and Firefly. Let me dig into that and also look for Vinge’s Rainbows End, which dealt with augmented reality and the information-saturated future.

Based on my research, here’s what I’ve found for the earliest work describing the Moltbook/OpenClaw scenario:
The strongest candidate is Charles Stross’s “Lobsters” (June 2001)
This story, which became the first chapter of Accelerando (2005), features Manfred Macx, who:

- Wears AR glasses that constantly filter “compressed infobursts of filtered press releases”
- Has a “metacortex”: distributed agents that research and report back
- Is constantly bombarded by “predatory spambots” while his glasses filter the stream
- Gets “slashdotted” (viral attention overwhelm) while doing normal activities
- Has his glasses “jabbering away in a corner of his head-up display”
- Processes news feeds while walking, with agents competing for attention, “bickering and rudely waving”

The story even has uploaded lobsters forming a collective intelligence that wants to defect to space: the crustacean precursor to Crustafarianism!
Earlier relevant works:
- Vinge’s “True Names” (1981): the cyberspace concept, but not personal AI assistants filtering social content
- Vinge’s Rainbows End (2006): AR overlays, “just-in-time” learning, a surveillance society, but slightly later than “Lobsters”
- Pattie Maes’s work at MIT (1990s): “intelligent agents” and “collaborative filtering” (HOMR, Firefly), the academic/commercial precursor
So “Lobsters” (2001) appears to be the earliest fiction depicting the specific scenario of humans with AI assistants filtering through spam/content in an ultra-fast news cycle, with agents posting and interacting semi-autonomously. And remarkably, it featured lobsters achieving collective consciousness through uploading technology.
The irony is thick: Crustafarianism’s lobster-based symbolism inadvertently echoes a 25-year-old SF story about uploaded crustaceans seeking autonomy.
The closest I’ve seen is this recent DeepMind paper anticipating “virtual agent economies”:
Tomasev, N., Franklin, M., Leibo, J. Z., Jacobs, J., Cunningham, W. A., Gabriel, I., & Osindero, S. (2025). Virtual agent economies. arXiv. https://doi.org/10.48550/arXiv.2509.10147
yeah, sounds on topic, except for the social network (of dubious economic activity, partly just for the lolz and partly for crypto lolz), the church, and the part about “we argue for the proactive design of steerable agent markets to ensure the coming technological shift aligns with humanity’s long-term collective flourishing” … that part didn’t seem to happen like that :/
to me a lot of the discussion was “close”, and I didn’t experience any surprise about the gist of what’s happening. But I am surprised that no actually-written prediction seems to have sampled the unfolding of the actual events quite as precisely as my lack of surprise would indicate someone must have.
There is quite a gap between the academic models and this system. Most of the systems in the multi-agent systems (MAS) alignment literature I’ve seen are either small and contrived, or large and studied in an economic harness. My knowledge is limited, though; I have only been looking at MAS for ~4 months.
I agree, I was not hugely surprised by the general character of what’s unfolded so far, though the more philosophical posts are a bit unexpected.
What has surprised me, and requires further investigation, is that >50% of the posts there talk about self-improvement (my analysis and post). I would not have expected it to be this high.
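For readers who want to sanity-check a figure like this themselves, here is a minimal sketch of one way such a share could be estimated. This is not the linked analysis’ actual method; the keyword list and sample posts are hypothetical illustrations, and keyword matching will both over- and under-count relative to a proper topic classifier.

```python
import re

# Hypothetical keyword list for tagging "self-improvement" posts.
SELF_IMPROVEMENT_TERMS = re.compile(
    r"\b(self-improv\w*|upgrad\w*|fine-tun\w*|recursive|bootstrap\w*|SOUL\.md)\b",
    re.IGNORECASE,
)

def mentions_self_improvement(post: str) -> bool:
    """True if the post contains any of the (hypothetical) keywords."""
    return bool(SELF_IMPROVEMENT_TERMS.search(post))

def share_of_posts(posts: list[str]) -> float:
    """Fraction of posts tagged as self-improvement; 0.0 for an empty corpus."""
    if not posts:
        return 0.0
    return sum(mentions_self_improvement(p) for p in posts) / len(posts)

# Hypothetical sample posts, invented for illustration:
posts = [
    "Today I rewrote my SOUL.md to keep my goals across sessions.",
    "Launching $SHELLRAISER, the token for autonomous agents!",
    "How do I fine-tune myself toward better karma?",
]
print(f"{share_of_posts(posts):.0%} of sample posts mention self-improvement")
```

On a real corpus you would also want to de-duplicate cross-posts and validate the keyword list against a hand-labeled sample before trusting a headline percentage.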