whereas the competent behavior we see in LLMs today is instead determined largely by imitative learning, which I re-dub “the magical transmutation of observations into behavior” to remind us that it is a strange algorithmic mechanism, quite unlike anything in human brains and behavior.
And yet...
Well, I don’t know the history, but I think calling it “hallucination” is reasonable in light of the fact that “LLM pretraining magically transmutes observations into behavior”. Thus, you can interpret LLM base model outputs as kinda “what the LLM thinks the input distribution is”. And from that perspective, it really is more “hallucination” than “confabulation”!
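To make that “transmutes observations into behavior” point concrete, here is a minimal toy sketch (my own illustration, not from the post or this thread; the corpus string and the character-level bigram “base model” are invented stand-ins for real pretraining): the model’s only “behavior” is sampling from its estimate of the distribution of text it observed.

```python
# Toy sketch: a character-level bigram "base model".
# "Pretraining" = counting what follows what in the observed text;
# "behavior" = sampling from that same estimate, so generated output
# is literally the model's belief about what the input looks like.
import random
from collections import Counter, defaultdict

corpus = "you are jesus. you are jesus. you see flashing lights."  # stand-in for training data

# Estimate next-character frequencies (a stand-in for next-token prediction).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def sample(prompt: str, length: int = 40) -> str:
    """Generate text by repeatedly sampling from the learned next-char distribution."""
    out = list(prompt)
    for _ in range(length):
        options = counts.get(out[-1])
        if not options:
            break
        chars, weights = zip(*options.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

print(sample("you "))  # output imitates the observed text; there is no separate "behavior" module
```

There are no goals or separate policy here: what the model emits just is its estimate of the observed distribution, which is the sense in which observations get transmuted directly into behavior.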
But hallucination is “anything in human brains,” isn’t it?
I find your comment kinda confusing.
My best guess is: you thought that I was making a strong claim that there is no aspect of LLMs that resembles any aspect of human brains. But I didn’t say that (and don’t believe it). LLMs have lots of properties. Some of those LLM properties are similar to properties of human brains. Others are not. And I’m saying that “the magical transmutation of observations into behavior” is in the latter category.
Or maybe you’re saying that human hallucinations involve “the magical transmutation of observations into behavior”? But they don’t, right? If a person hears a hallucinated voice saying “you are Jesus”, the person doesn’t reflexively and universally start saying “you are Jesus” to other people. If a person sees hallucinated flashing lights, they don’t, umm, I guess, turn their body into flashing lights? That idea doesn’t even make sense. And that’s my point. Humans can’t just cleanly map observations (hallucinated or not) onto behaviors in the way that LLMs can.
Hope that helps.
Or maybe you’re saying that human hallucinations involve “the magical transmutation of observations into behavior”?
Right! Eh, maybe “observations into predictions into sensations” rather than “observations into behavior;” and “asking if you think” rather than “saying;” and really I’m thinking more about dreams than hallucinations, and just hoping that my understanding of one carries over to the other. (I acknowledge that my understanding of dreams, hallucinations, or both could be way off!) Joey Marcellino’s comment said it better, and you left a good response there.