I think I would generally avoid saying that LLMs or current learning programs don’t generate new concepts simpliciter. Plausibly I did say that somewhere, but if so, I’d hope to be able to claim it was a typo or an elision for space/clarity. What I said here was “good at generating interesting novel concepts on par with humans”. I know perfectly well that LLMs gain concepts (after a fashion) during training, and I’ve written about that. I would dispute that they use or have concepts in the same relevant ways that humans do, though.
I’m confident that, generally speaking, there’s lots of interesting content contained in LLMs, gained through training, which is unknown to all humans. (The same could be said of other systems such as AlphaGo, and even of old-style Stockfish at runtime, if you admit that.)
(Though, apparently, and unlike human experts, the models don’t thereby learn words for those concepts, or have the ability to introspect and put handles on their conceptual representations, any more than I can introspect into how my visual cortex works.)
So like, yeah, they have something kinda related to human concepts in their full power, but not the same thing. This fits with my claim that they don’t have much originary general intelligence: they have GI distilled from humans; some more distilled stuff that’s not exactly “knowledge from humans” but is kinda narrower (like, LLMs know word collocation frequencies like no human does); and some other stuff that’s not very general. I posit.
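To make the collocation point concrete, here’s a minimal toy sketch (my own illustration; the corpus and the normalization are assumptions, not anything extracted from an actual model) of the kind of statistic an LLM implicitly absorbs, except over billions of tokens and much longer contexts:

```python
# Toy bigram collocation counts over a tiny hand-made corpus.
# An LLM's next-token distribution implicitly tracks this same kind of
# conditional frequency, at vastly greater scale and context length.
from collections import Counter

corpus = "the cat sat on the mat and the cat slept".split()  # illustrative only

bigrams = Counter(zip(corpus, corpus[1:]))  # adjacent word pairs
unigrams = Counter(corpus)

# Conditional frequency P(next | word): the crude ancestor of what a
# language model's output distribution encodes.
for (w1, w2), n in bigrams.most_common(3):
    print(f"P({w2!r} | {w1!r}) = {n / unigrams[w1]:.2f}")
```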