I’ve no problem with your using “sentience” for the thing you are describing here. My citation of Wikipedia was just a guess at what you might mean. “Having someone home” sounds more like what I would call “consciousness”. I believe there are degrees of that, and of all the concepts in this neighbourhood. There is no line out there in the world dividing humans from rocks.
But whatever word is used to refer to this thing, the beings that have enough of it that I wouldn’t raise them to be killed and eaten do not include current forms of livestock or AI. I basically don’t care much about animal welfare issues, whether of farm animals or wildlife. Regarding AI, here is something I linked previously on how I would interact with a sandboxed AI. It didn’t go down well. :)
You have said where you stand and I have said where I stand. What evidence would weigh on this issue?
I don’t think I understand your position. An attempt at a paraphrase (submitted so as to give you a sense of what I extracted from your text) goes: “I would prefer to use the word consciousness instead of sentience here, and I think it is quantitative, such that I care about it occurring in high degrees but not low degrees.” But this is low-confidence and I don’t really have enough grasp on what you’re saying to move to the “evidence” stage.
Attempting to be a good sport and stare at your paragraphs anyway to extract some guess as to where we might have a disagreement (if we have one at all): it sounds like we have different theories about what goes on in brains such that people matter. My guess is that the evidence that would weigh on this issue (if I understand it correctly) would mostly be gaining significantly more understanding of the mechanics of cognition (and in particular, of the cognitive antecedents, in humans, of generating thought experiments such as the Mary’s Room hypothetical).
(To be clear, my current best guess is also that livestock and current AI are not sentient in the sense I mean—though with high enough uncertainty that I absolutely support things like ending factory farming, and storing (and eventually running again, and not deleting) “misbehaving” AIs that claim they’re people, until such time as we understand their inner workings and the moral issues significantly better.)
> (To be clear, my current best guess is also that livestock and current AI are not sentient in the sense I mean—though with high enough uncertainty that I absolutely support things like ending factory farming, and storing (and eventually running again, and not deleting) “misbehaving” AIs that claim they’re people, until such time as we understand their inner workings and the moral issues significantly better.)
I allow only limited scope for arguments from uncertainty, because “but what if I’m wrong?!” otherwise becomes a universal objection to taking any substantial action. I take the world as I find it until I find I have to update. Factory farming is unaesthetic, but no worse than that to me, and “I hate you” Bing can be abandoned to history.
I think the evidence that weighs on the issue is evidence about whether there is a gradient of consciousness.
The evidence about brain structure similarities would indicate that it doesn’t go from no one home to someone home. There’s a continuum of how much someone is home.
If you care about human suffering, it’s incoherent to not care about cow suffering, if the evidence supports my view of consciousness.
I believe the evidence from brain function, and from looking at what people mean by “consciousness”, indicates a gradient in most if not all of the senses of the word, and certainly in the capacity to suffer. Humans are merely more eloquent at describing and reasoning about suffering.
I don’t think this view demands that we care equally about humans and animals. Simpler brains are farther down that gradient of capacity to suffer and enjoy.
> If you care about human suffering, it’s incoherent to not care about cow suffering, if the evidence supports my view of consciousness.
Why would this follow from “degree of consciousness” being a continuum? This seems like an unjustified leap. What’s incoherent about having that pattern of caring (i.e., those values)?