I honestly think of specialised models not as brains in their own right, but as cortices: pieces of a brain. You can obviously hook them up together to do all sorts of things (for example, a multimodal LLM could take an image of a board and turn it into a series of coordinates and piece names). The caveat is that these models would all exist one level below the emergent simulacra that have actual agency. They’re the book, or the operator, or the desk in the Chinese Room; it’s the Room as a whole that is intelligent and agentic.
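A toy sketch of that composition idea, assuming hypothetical stand-ins throughout (`vision_cortex`, `chess_cortex`, and their canned outputs are invented for illustration, not real model APIs): each specialised model handles one sub-task, and whatever goal-directedness exists lives in the loop wiring them together rather than in any individual piece.

```python
# Sketch: specialised models as "cortices" composed into a pipeline.
# Both model functions are hypothetical stubs returning canned values.

from dataclasses import dataclass

@dataclass
class Piece:
    name: str    # e.g. "white king"
    square: str  # e.g. "e1"

def vision_cortex(board_image: bytes) -> list[Piece]:
    """Stand-in for a multimodal LLM that turns a board image into
    coordinates and piece names. Returns a canned position here."""
    return [Piece("white king", "e1"), Piece("black king", "e8")]

def chess_cortex(position: list[Piece]) -> str:
    """Stand-in for a chess engine mapping a position to a move."""
    return "Ke1-e2"  # placeholder move

def agent_loop(board_image: bytes) -> str:
    # Neither sub-model pursues a world-referenced goal; any goal-directed
    # behaviour emerges from the pipeline as a whole.
    position = vision_cortex(board_image)
    return chess_cortex(position)

print(agent_loop(b"<image bytes>"))  # -> "Ke1-e2"
```

Swapping either stub for a real model wouldn’t change the structure: agency, if the word applies, is a property of the pipeline, not of its components.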
Or in other words: our individual neurons don’t optimise for world-referenced goals either. Their goal is just “fire if stimulated in such-and-such a way”.
Yes, and networks of sensory neurons apparently minimize prediction error in a way similar to an LLM doing next-word prediction, except that neurons also minimize prediction error across hierarchies. Individually they are obviously not agents, but together they combine into one.
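A minimal numerical sketch of that hierarchical idea, in the spirit of predictive coding: each layer’s activity predicts the layer below it, and activities are updated by gradient descent to reduce the summed squared prediction errors. The layer sizes, random weights, and learning rate are arbitrary assumptions for illustration, not a model of real sensory cortex.

```python
# Toy hierarchical prediction-error minimisation (predictive-coding style).
# Layer 0 is fixed sensory input; upper layers are latent activities that
# settle so that top-down predictions explain the layer below.

import numpy as np

rng = np.random.default_rng(0)
sizes = [8, 6, 4]  # layer 0 = input, layers 1..2 = latents (arbitrary choices)
W = [rng.normal(0, 0.3, (sizes[i], sizes[i + 1])) for i in range(len(sizes) - 1)]

x = [np.zeros(n) for n in sizes]
x[0] = rng.normal(0, 1, sizes[0])  # fixed sensory input

lr = 0.05
for step in range(200):
    # top-down predictions: layer i+1 predicts layer i
    preds = [W[i] @ x[i + 1] for i in range(len(W))]
    errs = [x[i] - preds[i] for i in range(len(W))]
    # nudge each latent layer to explain away the error below it,
    # while staying predictable from the layer above it
    for i in range(1, len(sizes)):
        grad = -W[i - 1].T @ errs[i - 1]       # error at the level below
        if i < len(sizes) - 1:
            grad += x[i] - W[i] @ x[i + 1]     # error at its own level
        x[i] -= lr * grad

total_err = sum(float(e @ e) for e in errs)
print(f"total squared prediction error: {total_err:.3f}")
```

No unit in this loop “wants” anything world-referenced; each update only reduces a local error, yet the stack as a whole settles into a coherent explanation of its input.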