It doesn’t take much thinking to draw a distinction between “signals to our brain that are indicative of damage inflicted to a body part” on the one hand, and “the realization that major portions of our life plans have to be scrapped in consequence of damaged body parts” on the other. The former requires only a nervous system; the latter requires the sort of nervous system that makes and cares about plans.
Yes, but that assumes the difference cuts in favor of your hypothesis. David Foster Wallace, from “Consider The Lobster”:
Lobsters do not, on the other hand, appear to have the equipment for making or absorbing natural opioids like endorphins and enkephalins, which are what more advanced nervous systems use to try to handle intense pain. From this fact, though, one could conclude either that lobsters are maybe even more vulnerable to pain, since they lack mammalian nervous systems’ built-in analgesia, or, instead, that the absence of natural opioids implies an absence of the really intense pain-sensations that natural opioids are designed to mitigate. I for one can detect a marked upswing in mood as I contemplate this latter possibility...
The entire article is here and that particular passage is here. And later:
Still, after all the abstract intellection, there remain the facts of the frantically clanking lid, the pathetic clinging to the edge of the pot. Standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience. To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.
In this last paragraph (which btw is immediately preceded, in the article, by an observation strikingly similar to mine in the grandparent), I would argue that “frantically” and “pathetic” are projections: the emotions they refer to originate in the viewer’s mind, not in the lobster’s.
We are demonstrably equipped with mental mechanisms whereby we can observe behaviour in others, and as a result of such observations we can experience “ascribed emotions”, which can sometimes take on an intensity not far removed from the sensations that originate in ourselves. That’s where our intuition that the lobster is in pain comes from.
Later in the article, the author argues that lobsters “are known to exhibit preferences”. Well, plants are known to exhibit preferences; they will for instance move so as to face the sun. We do not infer that plants can experience suffering.
We could build a robot today that would sense aspects of its surroundings, such as elevated temperature, and we could program it to give higher priority to its “get the hell away from here” routine when such conditions obtained. We would then be in a position to observe the robot doing the same thing as the lobster; we would, quite possibly, feel empathy for the robot. But we would not, I think, conclude that it is morally wrong to put the robot in boiling water. We would say that conclusion is a mistake, because we have not built into the robot the degree of personhood that would justify it.
cf. “The Soul of the Mark III Beast”, Terrel Miedaner, included in The Mind’s I, Dennett & Hofstadter.
Trust this community to connect the idea to the reference so quickly. “In Hofstadter we trust” :-)
For those who are not helped by the citation, it turns out that someone thoughtfully posted the relevant quote from the book on their website. I recommend reading it; the story is philosophically interesting and emotionally compelling.
The story was also dramatized in a segment of the movie Victim of the Brain, which is available in its entirety from Google Video. The relevant part begins at around 8:40.
Here is the description of the movie:
1988 docudrama about “the ideas of Douglas Hofstadter”. It was created by Dutch director Piet Hoenderdos. Features interviews with Douglas Hofstadter and Dan Dennett. Dennett also stars as himself. Original acquired from the Center for Research in Concepts and Cognition at Indiana University. Uploaded with permission from Douglas Hofstadter. Uploaded by Virgil Griffith.
That was fascinating. A lot of the point of the story—the implicit claim—was that you’d feel for an entity based on the way its appearance and behavior connected to your sympathy—like crying sounds eliciting pity.
In text that’s not so hard, because you can write things like “a shrill noise like a cry of fright” when the simple robot dodges a hammer. The words used to describe the sound are automatically loaded with mental assumptions about “fright”, simply in order to convey the sound to the reader.
With video the challenge seems like it would be much harder. It becomes more possible that people would feel nothing for some reason. Perhaps for technical reasons of video quality or bad acting, or for reasons more specific to the viewer (desensitized to video violence?), or maybe because the implicit theory about how mind-attribution is elicited is simply false.
Watching it turned out to be interesting on more levels than I’d have thought, because I did feel things, but I also noticed the visual tropes that are equivalent to mind-laden text… like music playing as the robot (off camera) cries while the camera slowly pans over the wreckage of previously destroyed robots.
Also, I thought it was interesting the way they switched the roles for the naive mysterian and the philosopher of mind, with the mysterian being played by a man and the philosopher being played by a woman… with her hair pinned up, scary eye shadow, and black stockings.
“She’s a witch! Burn her!”
Some Jains and Buddhists do infer that plants can experience suffering. The stricter Jain diet avoids vegetables that are harvested by killing the plant, like carrots and potatoes, in favor of fruits and grains that the plant sheds voluntarily or that come from already-dead plants.
That’s a preference of theirs; fine by me, but not obviously evidence-based.
I don’t mean to suggest that plants are clearly sentient, just that it’s plausible, even for a human, to have a coherent value system which attempts to avoid the suffering of anything which exhibits preferences.
I’d agree with that sentence if you replaced the word “suffering”, unsuitable because of its complex connotations, with “killing”, which seems adequate to capture the Jains’ intuitions as represented in the link above.
Though it is relevant to note that the motive may still be to avoid suffering—I wasn’t there when the doctrine was formed, and haven’t read the relevant texts, but it is possible that the presence of apparent preferences was interpreted as implying suffering.