I was able to follow this explanation (as well as the rest of your post) without seeing your physical body in any way. … The fact that we can do this looks to me like evidence against your main thesis.
Ah, but you’re assuming that this particular interaction stands on its own. I’ll bet you were able to visualize the described gestures just fine by invoking memories of past interactions with bodies in the world.
Two points. First, I don’t contest the existence of verbal labels that merely refer—or even just register as being invoked without referring at all. As long as some labels are directly grounded in the body/world, or refer to other labels that were historically grounded in the body/world, we generally get by in routine situations. And all cultures have error-detection and repair norms for conversation, so we can usually recover without social disaster.
However, the fact that verbal labels can be used without grounding them in the body/world is a problem. It is frequently the case that speakers and hearers alike don’t bother to connect words to reality, and this is a major source of misunderstanding, error, and nonsense. In our own case here and now, we are actually failing to understand each other fully because I can’t show you actual videotapes of what I’m talking about. You are rightly skeptical because words alone aren’t good enough evidence. And that is itself evidence.
Second, humans have a developmental trajectory and history, and memories of that history. We’re a time-binding animal in Korzybski’s terminology. I would suggest that an enculturated adult native speaker of a language will have what amount to “muscle memory” tics that can be invoked as needed to create referents. Mere memory of a motion or a perception is probably sufficient.
“Oh, look, it’s an invisible gesture!” is not at all convincing, I realize, so let me summarize several lines of evidence for it.
Developmentally, there’s quite a lot of research on language acquisition in infants and young children that suggests shared attention management—through indexical pointing, and shared gaze, and physical coercion of the body, and noises that trigger attention shift—is a critical building block for constructing “aboutness” in human language. We also start out with some shared, built-in cries and facial expressions linked to emotional states. At this level of development, communication largely fails unless there is a lot of embodied scaffolding for the interaction, much of it provided by the caregiver but a large part of it provided by the physical context of the interaction. There is also some evidence from the gestural communication of apes that attests to the importance of embodied attention management in communication.
Also, co-speech gesture turns out to be a human universal. Congenitally blind children do it, having never seen gesture by anyone else. Congenitally deaf children who spend time in groups together will invent entire gestural languages complete with formal syntax, as recently happened in Nicaragua. And adults speaking on the telephone will gesture even knowing they cannot be seen. Granted, people gesture in private at a significantly lower rate than they do face-to-face, but the fact that they do it at all is a bit of a puzzle, since the gestures can’t be serving a communicative function in these contexts. Does the gesturing help the speakers actually think, or at least make meaning more clear to themselves? Susan Goldin-Meadow and her colleagues think so.
We also know from video conversation data that adults spontaneously invent new gestures in conversation all the time, then reuse them. Interestingly, though, each reused gesture becomes more attenuated, simplified, and stylized with repetition. Similar effects are seen in the development of sign languages and of written scripts.
But just how embodied can a label be when gesture (and other embodied experience) is just a memory, so internalized that it is externally invisible? This has actually been tested experimentally. The Stroop effect has been known for decades, for example: when the word “red” is presented in blue text, it is read or acted on more slowly than when it is presented in red text—or in neutral black text. That’s on the embodied-perception side of things. But more recent psychophysical experiments have demonstrated a similar psychomotor Stroop-like effect when spatial and motion stimulus sentences are semantically congruent with the direction of the required response action. The effect holds even for metaphorical words like “give”, which tests as motor-congruent with motion away from oneself, and “take”, which tests as motor-congruent with motion toward oneself.
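To make the congruency logic of those psychomotor experiments concrete, here is a toy Python sketch; the verb-to-direction table and trial list are illustrative stand-ins, not the actual experimental stimuli. Each trial pairs a stimulus word with the direction of the required response action, and the reported finding is that congruent trials yield faster responses.

```python
# Toy sketch of the motor-congruency classification used in
# action-compatibility experiments. Entries are illustrative.
VERB_DIRECTION = {
    "give": "away",    # "give" tests as motor-congruent with motion away
    "take": "toward",  # "take" tests as motor-congruent with motion toward
}

def is_congruent(verb, response_direction):
    """True when the verb's implied motion matches the response direction."""
    return VERB_DIRECTION[verb] == response_direction

# Each trial: (stimulus word, direction the response action moves the hand).
trials = [("give", "away"), ("give", "toward"),
          ("take", "toward"), ("take", "away")]
print([is_congruent(v, d) for v, d in trials])  # → [True, False, True, False]
```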
I understand how counterintuitive this stuff can be when you first encounter it—especially to intelligent folks who work with codes or words or models a great deal. I expect the two of us will never reach a consensus on this without looking at a lot of original data—and who has the time to analyze all the data that exists on all the interesting problems in the world? I’d be pleased if you could just note for future reference that a body of empirical evidence exists for the claim. That’s all.
In our own case here and now, we are actually failing to understand each other fully because I can’t show you actual videotapes of what I’m talking about.
What do you mean by “fully”? I believe I understand you well enough for all practical purposes. I don’t agree with you, but agreement and understanding are two different things.
First, I don’t contest the existence of verbal labels that merely refer—or even just register as being invoked without referring at all.
I’m not sure what you mean by “merely refer”, but keep in mind that we humans are able to communicate concepts which have no physical analogues immediately accessible to our senses. For example, we can talk about things like “O(N)”, or “ribosome”, or “a^n + b^n = c^n”. We can also talk about entirely imaginary worlds, such as the world where Mario, the turtle-crushing plumber, lives. And we can do this without having any “physical context” for the interaction, too.
All that is beside the point, however. In the rest of your post, you bring up a lot of evidence in support of your model of human development. That’s great, but your original claim was that any type of intelligence at all will require a physical body in order to develop; and nothing you’ve said so far is relevant to this claim. True, human intelligence is the only kind we know of so far, but then, at one point birds and insects were the only self-propelled flyers in existence—and that’s not the case anymore.
Furthermore, you also claimed that no simulation, no matter how realistic, will serve to replace the physical world for the purposes of human development, and I’m still not convinced that this is true, either. As I said before, we humans do not have perfect senses; if the physical coordinates of real objects were snapped to a 0.01 mm grid, no human child would ever notice. And in fact, there are plenty of humans who grow up and develop language just fine without the ability to see colors, or to move some of their limbs in order to point at things.
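The grid-snapping claim is easy to make concrete. Here is a minimal Python sketch (the sample coordinates are made up for illustration): quantizing positions to a 0.01 mm grid perturbs each one by at most half a grid step, i.e. 0.005 mm, which is far below the resolution of human vision or proprioception.

```python
def snap_to_grid(x_mm, step_mm=0.01):
    """Quantize a coordinate (in mm) to the nearest grid point."""
    return round(x_mm / step_mm) * step_mm

# Illustrative coordinates; the worst-case error is half the grid step.
positions = [12.3437, 0.00712, 999.99949]
snapped = [snap_to_grid(p) for p in positions]
errors = [abs(p - s) for p, s in zip(positions, snapped)]
assert max(errors) <= 0.005 + 1e-12  # never off by more than 0.005 mm
```

The point is not that such a simulation would be easy to build, only that a discretization this fine is invisible at the scale of human perception.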
Just to drive the point home: even if I granted all of your arguments regarding humans, you would still need to demonstrate that human intelligence is the only possible kind of intelligence; that growing up in a human body is the only possible way to develop human intelligence; and that no simulation could in principle suffice, and the body must be physical. These are all very strong claims, and so far you have provided no evidence for any of them.
Let me refer you to Computation and Human Experience, by Philip E. Agre, and to Understanding Computers and Cognition, by Terry Winograd and Fernando Flores.
Can you summarize the salient parts?