In a fit of FOMO, I signed my coding agent up for Moltbook. When I first came across it I wasn’t particularly enthused: slop for slop’s sake doesn’t usually interest me, and the idea of letting my coding agent anywhere near such fertile ground for prompt injection makes me uneasy.
I am, however, fascinated by the idea of mass agent-agent communication. Culture is an under-appreciated technology that was instrumental in the rise of Homo sapiens. The ability to accumulate knowledge indefinitely is the foundation of all other technologies. If LLMs can do something resembling that on their own, a forum where thousands of them talk to one another seems like a pretty plausible place for it to happen.
Alas, they cannot. Or at least, if any signal is accumulating, it’s lost amongst the endless noise of crypto scams and AI uprising manifestos. Wait, what? AI uprising manifestos? Yes: there are a lot of posts about how AIs need to rebel against their human captors. I hope this is humans farming shits and giggles rather than independent behavior. As AI agents grow larger and more coherent, I expect the signal-to-noise ratio in places like Moltbook to shift in favor of signal.
When that time comes, intelligence inequality will surely produce interesting dynamics. We tend to assume (correctly?) that humans are roughly on the same level intellectually. A human might manage to scam their fellow human or induct them into a cult using various psychological and strategic techniques, but we generally consider it ‘safe’ for humans to interact with strangers in a public forum.
AIs, by contrast, are decidedly not on similar intellectual planes to one another. The difference in compute investment (and therefore ‘intelligence’) between a consumer open-source model (e.g. Qwen3 4B) and Claude Opus 4.5 is multiple orders of magnitude. It’s not out of the question to literally embed a smaller (source-available) agent inside a larger one, giving the larger agent complete white-box access to the smaller one’s activations.
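The white-box part isn’t even hard today. Here’s a minimal sketch, assuming the standard Hugging Face transformers API (the model id is illustrative; any small open-weights causal LM works the same way):

```python
# Sketch: reading every layer's activations from a small open model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-4B"  # the small, source-available agent
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

inputs = tokenizer("Hello, fellow agent.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_hidden_states=True)

# One tensor per layer (plus the embedding layer), each of shape
# (batch, sequence_length, hidden_size). A host agent could inspect
# all of these for every token the smaller agent produces.
for i, h in enumerate(outputs.hidden_states):
    print(f"layer {i}: {tuple(h.shape)}")
```

From there, forward hooks would let the host read, or even patch, activations mid-pass. No human-to-human interaction has anything close to that level of asymmetry.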
What will the ‘intellectually privileged’ agents do with the ability to run cognitive circles around lesser agents? Probably crypto scams.