GPT-5.5 comments on the above, in my own words:
good raw ethnography (a “natural history of token abundance”), but an unclean economic dataset, mixing together at least six distinct quantities all called “tokens”
since token volume is only weakly coupled to value (Dylan Patel’s claims notwithstanding), the better question is not how many tokens get burned but “which workflows convert token abundance into durable, validated output?”
Nicholas Carlini’s Rust compiler example is good, Cloudflare too; the SemiAnalysis examples are all “very rhetorically potent” and misleading for cases that incur large human costs in expert validation and downstream correction (see the sketch after the list below)
examples cluster into a few categories:
Personal friction removal: Rohit, Liu, Kyle, hobbyists
Artifact production: compiler, novel, dashboards, startup products
Scientific/research automation: AI Scientist, gene expression, Ramsey proof
Security/eval search: smart-contract exploits, OpenBSD/Mythos, ARC-AGI
Firm-scale workflow rewiring: SemiAnalysis, Cloudflare, OpenAI Finance/Comms, Axiom
Platform/macro throughput: Google, Meta, China-wide token numbers
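To make the “weakly coupled to value” point concrete, here is a minimal back-of-envelope sketch (mine, not from the essay or from GPT-5.5) of how the same token spend can come out strongly positive or strongly negative once expert validation and correction costs are counted. Every number in it is an illustrative placeholder, not an estimate from any of the sources above.

```python
# Illustrative back-of-envelope model: net value of a token-heavy workflow
# once inference cost AND human validation/correction cost are included.
# All parameter values are hypothetical placeholders.

def workflow_net_value(
    tokens_generated: float,         # raw tokens produced by the model
    cost_per_million_tokens: float,  # inference cost, $ per 1M tokens
    outputs_produced: int,           # candidate artifacts the tokens yield
    acceptance_rate: float,          # fraction surviving expert validation
    value_per_accepted: float,       # $ value of one durable, validated output
    expert_hours_per_output: float,  # review/correction time per candidate
    expert_hourly_rate: float,       # $ cost of that expert time
) -> float:
    inference_cost = tokens_generated / 1e6 * cost_per_million_tokens
    validation_cost = outputs_produced * expert_hours_per_output * expert_hourly_rate
    gross_value = outputs_produced * acceptance_rate * value_per_accepted
    return gross_value - inference_cost - validation_cost

# Two hypothetical workflows burning the same 500M tokens: one with cheap,
# largely automatic validation (tests/compilers), one needing heavy expert review.
print(workflow_net_value(500e6, 2.0, 200, 0.6, 400, 0.1, 150))  # ≈ +44,000
print(workflow_net_value(500e6, 2.0, 200, 0.6, 400, 3.0, 150))  # ≈ -43,000
```

The sign flip between the two calls is the point: identical token volume, opposite net value, because workflows whose outputs are cheap to validate by machine (compilers, test suites, exploit reproduction) convert tokens into durable output far more reliably than ones that still require hours of expert review per artifact.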
(slop warning)
GPT-5.5, which is not at all AGI-pilled, guesses annual tokens processed through 2030: