A possible explanation: both brains and LLMs are somehow solving the symbol grounding problem. It may be that the most natural solutions to this problem share commonalities, or even that all solutions are necessarily isomorphic to each other.
Anyone who has played around with LLMs for a while can see that they are not just “stochastic parrots”, but I think it’s a pretty big leap to call anything within them “human-like” or “brain-like”.
If an AI (perhaps a GOFAI or just an ordinary computer program) implements addition using the standard algorithm for multi-digit addition that humans learn in elementary school, does that make the AI human-like? Maybe a little, but it seems less misleading to say that the method itself is just a natural way of solving the same underlying problem. The fact that AIs are becoming capable of solving more complex problems that were previously only solvable by human brains seems more like a fact about a general increase in AI capabilities than a result of AI systems getting more “brain-like”.
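For concreteness, here is a minimal Python sketch (mine, not anything from the comment above) of that schoolbook algorithm; the function name and digit-string interface are just illustrative choices:

```python
# A minimal sketch of schoolbook multi-digit addition: add digit by digit
# from the right, carrying into the next column, as taught in elementary school.
def schoolbook_add(a: str, b: str) -> str:
    """Add two non-negative integers given as decimal digit strings."""
    result = []
    carry = 0
    i, j = len(a) - 1, len(b) - 1
    while i >= 0 or j >= 0 or carry:
        da = int(a[i]) if i >= 0 else 0
        db = int(b[j]) if j >= 0 else 0
        total = da + db + carry
        result.append(str(total % 10))   # digit written in this column
        carry = total // 10              # carry into the next column
        i, j = i - 1, j - 1
    return "".join(reversed(result))

assert schoolbook_add("478", "964") == "1442"
```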
Saying that any system which solves a problem via methods similar to humans’ is brain-like seems to unfairly privilege the specialness / uniqueness of the brain. Claims like that (IMO wrongly) suggest that those solutions somehow “belong” to the brain, simply because that is where we first observed them.
The brain isn’t exactly some arbitrary set of parameters picked from mindspace; it’s the most statistically likely general intelligence to emerge from evolutionary mechanisms acting on a mammalian brain. Presumably the processes it uses are the simplest to build bottom-up, so the claim is misguided but not entirely wrong.
To a large extent, this describes my new views on LLM capabilities too, especially transformers: they’re missing important aspects of human cognition, but they’re not useless stochastic parrots, as some of the more dismissive people claim.
To me, it really looks like brains and LLMs are both using embedding spaces to represent information. Embedding spaces ground symbols by automatically relating all concepts they contain, including the grammar for manipulating these concepts.
There are some papers suggesting this could indeed be the case, at least for language processing, e.g. “Shared computational principles for language processing in humans and deep language models” and “Brain embeddings with shared geometry to artificial contextual embeddings, as a code for representing language in the human brain”.
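As a toy illustration of what “relating concepts” in an embedding space means (the vectors below are made up for the example, not taken from any model or from the papers above): concepts become points in a vector space, and relatedness can be read off geometrically, e.g. as cosine similarity.

```python
# Toy sketch: hypothetical 4-dimensional concept vectors (invented for
# illustration). Related concepts sit close together, so similarity falls
# out of the geometry of the space rather than any explicit rule.
import numpy as np

embeddings = {
    "cat": np.array([0.9, 0.1, 0.3, 0.0]),
    "dog": np.array([0.8, 0.2, 0.4, 0.1]),
    "car": np.array([0.1, 0.9, 0.0, 0.5]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(cosine(embeddings["cat"], embeddings["dog"]))  # high: related concepts
print(cosine(embeddings["cat"], embeddings["car"]))  # low: unrelated concepts
```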