I find your comment kinda confusing.
My best guess is: you thought that I was making a strong claim that there is no aspect of LLMs that resembles any aspect of human brains. But I didn’t say that (and don’t believe it). LLMs have lots of properties. Some of those LLM properties are similar to properties of human brains. Others are not. And I’m saying that “the magical transmutation of observations into behavior” is in the latter category.
Or maybe you’re saying that human hallucinations involve “the magical transmutation of observations into behavior”? But they don’t, right? If a person hears a hallucinated voice saying “you are Jesus”, the person doesn’t reflexively and universally start saying “you are Jesus” to other people. If a person sees hallucinated flashing lights, they don’t, umm, I guess, turn their body into flashing lights? That idea doesn’t even make sense. And that’s my point. Humans can’t just cleanly map observations (hallucinated or not) onto behaviors in the way that LLMs can.
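(To make the asymmetry concrete: in an autoregressive LLM, “observations” (input tokens) and “behavior” (output tokens) live in one shared vocabulary, so observed text can flow straight back out as behavior. Here’s a toy sketch of that point — a hypothetical two-line induction-style rule stands in for a real model; no actual LLM works this simply:

```python
# Toy illustration of why an LLM can map observations directly onto
# behavior: input tokens and output tokens share one vocabulary, so
# observed text can simply be emitted back out. A trivial
# induction-style rule stands in for a real model here (hypothetical,
# for illustration only).

def next_token(context: list[str]) -> str:
    """Predict the token that followed the most recent earlier
    occurrence of the last token; if none, repeat the last token."""
    last = context[-1]
    for i in range(len(context) - 2, -1, -1):
        if context[i] == last:
            return context[i + 1]
    return last  # fallback: echo the last observed token

def generate(prompt: list[str], n: int) -> list[str]:
    """Emit n tokens of 'behavior' conditioned on observed tokens."""
    context, behavior = list(prompt), []
    for _ in range(n):
        tok = next_token(context)
        behavior.append(tok)
        context.append(tok)
    return behavior

# A hallucinated "observation" that loops back to its first word
# gets parroted straight out as behavior:
print(generate(["you", "are", "Jesus", "you"], 2))  # -> ['are', 'Jesus']
```

The observed tokens and the emitted tokens are the same kind of object — there’s no modality gap to cross, which is exactly what a human hearing a voice and then moving their mouth doesn’t get for free.)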
Hope that helps.
> Or maybe you’re saying that human hallucinations involve “the magical transmutation of observations into behavior”?

Right! Eh, maybe “observations into predictions into sensations” rather than “observations into behavior;” and “asking if you think” rather than “saying;” and really I’m thinking more about dreams than hallucinations, and just hoping that my understanding of one carries over to the other. (I acknowledge that my understanding of dreams, hallucinations, or both could be way off!) Joey Marcellino’s comment said it better, and you left a good response there.