In particular, even if the LLM were continually trained (with training and architecture similar to how LLMs are already trained), it still wouldn’t do what humans do: quickly picking up new analogies, quickly creating new concepts, and generally reforging concepts.
Is this true? How do you know? (I assume there’s some facts here about in-context learning that I just happen to not know.)
It seems like, e.g., I can teach an LLM a new game in one session, and it will operate within the rules of that game.
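As a concrete version of the “teach it a new game in one session” test, here’s a minimal sketch of how one might check it. This assumes the `openai` Python client with an API key in the environment; the model name, the made-up game, and the prompts are all illustrative choices of mine, not anything from the discussion above. The idea is just: state novel rules in the prompt, play a few turns, and check whether the model’s moves stay legal.

```python
# Sketch: test in-context rule-following with a made-up game ("Mirror Nim").
# Assumptions: `openai` client installed, OPENAI_API_KEY set, model name illustrative.
from openai import OpenAI

RULES = (
    "We are playing a made-up game called 'Mirror Nim'. There are 10 tokens. "
    "On your turn you may take 1 or 2 tokens, but you may never take the same "
    "number the previous player just took. Whoever takes the last token loses. "
    "Reply with only the number of tokens you take."
)

def legal(move: int, prev_move: int | None, remaining: int) -> bool:
    """Check a proposed move against the stated rules."""
    return move in (1, 2) and move != prev_move and move <= remaining

client = OpenAI()
messages = [{"role": "system", "content": RULES}]
remaining, prev = 10, None

while remaining > 0:
    # The "human" player takes 1 unless the rules forbid repeating the last move.
    human = 1 if prev != 1 else 2
    remaining -= human
    prev = human
    if remaining <= 0:
        break
    messages.append(
        {"role": "user", "content": f"I take {human}. {remaining} tokens remain. Your move."}
    )
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    text = reply.choices[0].message.content.strip()
    move = int(text.split()[0])  # crude parse; a real test would be more robust
    print(f"Model takes {move}: {'legal' if legal(move, prev, remaining) else 'ILLEGAL'}")
    messages.append({"role": "assistant", "content": text})
    remaining -= move
    prev = move
```

The point of the sketch is just that the claim is cheaply testable: if the model reliably produces legal moves for rules it has never seen, that’s evidence for the “can learn a new game in one session” reading; systematic illegal moves would cut the other way.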
I won’t spell out why I think this, but I’ll give another reason you should take it more seriously: their sample complexity sucks.