As another aside, this gives a rather natural way to view the difference between humans and current AI. An AI lacks “understanding” not because it is merely playing a language game (and certainly not because it runs on a silicon substrate), but because it doesn’t engage in model predictive controlling. Rather, its statistical models are more like the passive rock in the stream example earlier: it mirrors its environment without actively engaging it, as a living model predictive controller does.
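To make the contrast concrete, here is a minimal toy sketch (my own illustration, not from the thread, with made-up dynamics and numbers): a passive statistical model that merely mirrors its observations, versus a model predictive controller that uses an internal model to choose actions in a closed loop with its environment.

```python
# Toy contrast: passive mirroring vs. model predictive control (MPC).
# Assumed toy dynamics (hypothetical): next_state = state + action.

def passive_prediction(history):
    """Mirror the stream: extrapolate the average past change.
    This model predicts, but never acts on the environment."""
    deltas = [b - a for a, b in zip(history, history[1:])]
    return history[-1] + sum(deltas) / len(deltas)

def mpc_action(x, target, horizon=5):
    """Receding-horizon control: simulate each candidate action held
    constant over the horizon, score the simulated trajectory against
    the goal, and return the best first action."""
    candidates = [k / 10 for k in range(-10, 11)]  # actions in [-1, 1]

    def cost(u):
        sim, total = x, 0.0
        for _ in range(horizon):
            sim += u                       # internal model of the dynamics
            total += (sim - target) ** 2   # distance from goal along the way
        return total

    return min(candidates, key=cost)

# Closed loop: act, let the environment respond, observe, repeat.
x, target = 0.0, 3.0
trajectory = [x]
for _ in range(20):
    x += mpc_action(x, target)
    trajectory.append(x)

print(round(trajectory[-1], 2))                  # settles near the target
print(round(passive_prediction(trajectory), 2))  # passive model only extrapolates
```

The point of the sketch: the controller's "model" only does work because it sits inside an act-observe loop, whereas the passive predictor can fit the same data without ever engaging the world.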
That’s entirely unconnected to understanding (if that were the case, a locked-in person who can only communicate by moving their eyes wouldn’t understand anything).
A locked-in person communicating only by eye movement would understand perfectly well in my account. If they’re alive, their metabolism keeps their body, brain and mind constantly interacting with their environment. This holds even if they’re asleep or in a coma. My point (which is really just orthodox biology) is that human language processing results from metabolic processes (what I’ve called model predictive controlling to highlight its modeling character), and that includes what we call syntax and semantics—our sense of “understanding”.
If they’re alive, their metabolism keeps their body, brain and mind constantly interacting with their environment. This holds even if they’re asleep or in a coma.
When asleep or in a coma, the mind doesn’t interact with the environment at all.
Also, this can’t be one of the requirements for understanding, because there is no conceptual connection between understanding something and interacting with the environment continuously rather than discretely (to the extent that receiving information through neural spikes can be approximated as continuous interaction with the environment at all).
Also, you can have an embodied language model that accepts information from the environment continuously; by that criterion, such a language model would then possess understanding.
My point (which is really just orthodox biology) is that human language processing results from metabolic processes (what I’ve called model predictive controlling to highlight its modeling character)
That confuses causality with necessity (metabolism causally preceding understanding doesn’t mean that metabolism or continuous input are necessary for it).
I’m not sure if we’re talking past each other or if there is genuine disagreement, but I’ll expound a bit.
When asleep or in a coma, the mind doesn’t interact with the environment at all.
The sleeping/comatose mind does interact constantly with the environment, in two ways. For starters, it’s well established that external sensory input (specifically sounds and touch) regularly makes its way into the conscious experience of dreaming and of comatose states. But that’s just a side issue here. At a more fundamental level, every living thing interacts 24/7 with its environment through its metabolism.
That confuses causality with necessity (metabolism causally preceding understanding doesn’t mean that metabolism or continuous input are necessary for it).
Maybe this is the crux of a misunderstanding. I don’t claim that “continuous input” in the sense you (seem to) mean is necessary for, or causally antecedent to, semantics. E.g., I’m not saying that I have to constantly look at a tree out in the woods in order to think about what a tree is. I’m only saying that any thought I have, and whatever language and semantics are attached to it, are the result (causally/necessarily, if you like) of my metabolic processing. (I’m using metabolism in the broadest sense to mean any chemical pathway that uses energy and produces entropy in the body, which includes neural activity.) If that’s not the case, then something non-biological makes human language possible, which I assume you don’t intend. Either way, that would be a hypothesis for a different type of discussion forum.