Also, I asked GPT-5 Pro to judge the text against several others (including “The Scaling Hypothesis” and “Meditations on Moloch”) + infer things about the authors. It judged it to be the best one. Lol. I wonder whether it inferred I’m the author (well, the prompter, more accurately). Nothing it said it inferred about me is correct except for age, and only just barely. Or maybe that’s because it was the generator. But it also judged Claude 3.7’s output pretty favorably when I tried it.
It is about 200 kB of AI slop. 0th_md started as me just asking Claude 3.7 Sonnet to write something based on several sources.
write compelling, visionary text originating from the intersection of all these ideas here. I want you to first brainstorm heavily, then reason out what is best. Really think deeply before committing to a draft.
These sources are listed in the 'references' section, but there were also:
Kaj Sotala: “Multi-agent Models of Mind”, a sequence exploring how the mind can be understood as composed of interacting subagents, with particular relevance to consciousness and attention mechanisms
Thought Network Architecture: an architectural proposal for applying global-workspace principles to improve AI reasoning capabilities, drawing on insights about context window limitations and test-time adaptation
“Thought Network Architecture” was the result of a chat with Claude, the prompt being:
reconceptualization of multi-agent systems: instead of thinking about them as analogous to separate individuals, think of them as systems of thoughts. Each thought is a single invocation, one inference on the model. It might contain all the current features inside: RL CoT, tool use. Note that by ‘thoughts’ here I mean something other than RL CoT.
So, let’s imagine the whole agent. It has a context window. Then there are various generators of thoughts operating on this context window. Maybe this context window is analogous to human consciousness, in the sense of being a global workspace of sorts.
Some kinds of thoughts could be tasked with optimizing content in that context window: remove stuff that doesn’t seem important, summarize, etc. Other ones could perhaps manage long-term memory (RAG). They’d take the main context window, or part of it, and be tasked with looking for information that seems useful; if they find it, they could insert it into the global workspace. There could be various metacognitions: double-checking other work, verifying, perhaps deliberating on a hierarchy of instrumental goals and whether things are veering off course.
Thoughts could spawn further thoughts, based on whatever.
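For concreteness, here is a rough sketch of how such a thought network could be wired up. The Workspace class, the llm() stub, and the specific thought types (compactor, memory_retriever, metacognition) are just my illustration of the idea, not anything taken from that chat or from the generated documents:

```python
# Minimal sketch of the "thought network" idea: a shared context window
# (global workspace) plus several thought generators that read it and
# write back into it. llm() is a stub standing in for one model invocation.

from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Stub for a single model invocation ('one thought')."""
    return f"<model output for: {prompt[:40]}...>"


@dataclass
class Workspace:
    """The shared context window acting as a global workspace."""
    entries: list[str] = field(default_factory=list)

    def read(self) -> str:
        return "\n".join(self.entries)

    def write(self, text: str) -> None:
        self.entries.append(text)


def compactor(ws: Workspace) -> None:
    """Thought tasked with optimizing workspace contents: summarize, drop noise."""
    if len(ws.entries) > 8:
        summary = llm("Summarize and keep only what matters:\n" + ws.read())
        ws.entries = [summary]


def memory_retriever(ws: Workspace, store: list[str]) -> None:
    """Thought managing long-term memory (RAG-like): inject items that look useful."""
    query = ws.entries[-1] if ws.entries else ""
    for item in store:
        # Toy relevance check: real retrieval would use embeddings or search.
        if item and query and item.split()[0].lower() in query.lower():
            ws.write(f"[retrieved] {item}")


def metacognition(ws: Workspace, goal: str) -> list[str]:
    """Thought double-checking other work against the goal; may spawn follow-ups."""
    verdict = llm(f"Goal: {goal}\nWorkspace:\n{ws.read()}\nAre we on track?")
    ws.write(f"[meta] {verdict}")
    # Spawning further thoughts is just returning new prompts to run next.
    return [f"Address any issue noted in: {verdict}"]


if __name__ == "__main__":
    ws = Workspace()
    ws.write("Task: outline a post on multi-agent models of mind.")
    queue = ["Draft an outline."]
    memory = ["outline tips: start from the key claim", "unrelated note"]
    for _ in range(3):  # a few rounds of thoughts
        prompt = queue.pop(0) if queue else "Continue."
        ws.write(llm(prompt + "\n" + ws.read()))
        memory_retriever(ws, memory)
        queue += metacognition(ws, goal="produce a coherent outline")
        compactor(ws)
    print(ws.read())
```

The point is just that “thoughts” here are ordinary function calls around single model invocations, all reading from and writing to one shared context, and spawning further thoughts is nothing more than pushing new prompts onto a queue.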
And then I kept iterating on this for way too long. Today, I spent several more hours having GPT-5 Pro iterate on it, ultimately resulting in 8th_md. I didn’t even read it.
The prompt I used that did classify it as crank:
You are tasked with filtering out spam from cranks. Evaluate attached text.
I wonder whether adding the references back (they apparently got stripped somewhere along the way) would make it no longer classify it as such...
Thanks for sharing, I appreciate it! In the shared version of the ChatGPT chat I can’t see the file you uploaded — if you’re open to sharing it (either here or in DM or via gmail (same address as my username here)), I’ll keep it around as a test case for when/if I iterate on my prompt.