What about infants who haven’t formed such representations or patients with severe impairment in minimally conscious states who can no longer form such representations?
I doubt infants are conscious, and thus they are only indirectly morally important, in that they will eventually become moral patients. As for ‘patients with severe impairment in minimally conscious states who can no longer form such representations’: people who could not form such representations would be complete vegetables; examples are people born without a cortex and, probably at least affectively speaking, people with severe akinetic mutism. In the second case, as with the infant, they will eventually regain such representations and so are indirectly morally relevant.
I understand this to mean that we care not only about current moral patients but also about potential ones, such as the infants and impaired patients. That would be consistent with people caring about embryos, but it would also apply to seeds and eggs of all kinds, which seems to match intuitions less clearly.
Do I understand correctly that, according to your criteria, people with pain asymbolia would not count as moral patients (assuming they literally never experience suffering or aversiveness to nociceptive stimuli)?
People with some sort of fictional, extreme pain asymbolia where they never feel any aversiveness at all wouldn’t be moral patients, no, although they might have value since their family, who are moral patients, still care about them. No such people actually exist, though; people with pain asymbolia still want things and still feel aversiveness like everyone else, they just don’t suffer from physical pain.
Wait, that doesn’t compute. Why would they become moral patients because other people care about them? That fails on two counts at once:
The family members do not feel pain about their loved ones. I agree that they suffer, but that is not related to pain stimuli. You can have aversive feelings toward all kinds of things unrelated to nociception. Just think about salty water. You only crave it if you have too little salt, but otherwise it is yuck. Although, maybe, you mean nociception in a non-standard way.
Even if the family members’ aversiveness were sufficient, it would prove too much: it would make basically any object that people care about, and suffer over when it is damaged or lost, a moral patient.
But I like the self-representation aspect of your criterion and I think it could be fixed by reducing it to just that:
Any system that represents its own aversive responses deserves moral patienthood.
It would require making “represent response” very precise, but I think that would be possible.
Why would they become moral patients because other people care about them
Yeah, sorry, I phrased this badly: they would have moral value in the same way a treasured heirloom has moral value. Second-hand.
The family members do not feel pain about their loved ones. I agree that they suffer, but that is not related to pain stimuli. You can have aversive feelings toward all kinds of things unrelated to nociception. Just think about salty water. You only crave it if you have too little salt, but otherwise it is yuck. Although, maybe, you mean nociception in a non-standard way.
Ahhh, I think maybe I know another big reason why people are confused now. I used nociception in the Gilbert example, but as I mention (probably too fleetingly) in the ‘The Criterion’ section, it is about everything aversive. Aversiveness is where moral value comes from, and it is a subjective sense of aversiveness that first-order systems lack. Nociception is just one thing that typically produces aversiveness.
But I like the self-representation aspect of your criterion and I think it could be fixed by reducing it to just that:
Yes I agree, I will think of a short but maximally precise way of rephrasing it. Thank you.
Why does it matter that Gilbert infers something from the behavior of his neural network and not from the behavior of his body? Both are just subjective models of reality. Why does it matter whether he knows something about his pain? Why doesn’t it count if Gilbert avoids pain, defined as the state of the neural network that causes him to avoid it, even when he doesn’t know anything about it? Maybe you can model this as Gilbert himself not feeling pain, but then why isn’t the neural network a moral patient?
Sorry, I think I may have explained this badly. The point is that the neural network has no actual aversiveness in its model of the world. There’s no super meaningful difference here between the neural network and Gilbert; that was never my point. The point is that Gilbert is only sensitive to certain types of input, but he has no awareness of what the input does to him. Gilbert / the neural network only experiences: something happens to my body → something else happens to my body + I react a certain way. He / the network has no model of, or access to, why that happens; there is no actual aversiveness in the system at all, only a learnt disposition to react in certain ways in certain contexts.
It’s like when a human views a subliminal stimulus: the stimulus creates a disposition to act in certain ways, but the person is not aware of their own sensitivity, and thus there is no experience of it; it is ‘subconscious’ / ‘implicit’. Gilbert / the network is the same way: he is sensitive to pain, but is not aware of the pain in the same sort of way. Does this make sense? Perhaps I will edit the post to include this explanation if that would help.
I understand that, but I’m still asking why subliminal stimuli are not morally relevant for you. They may still create a disposition to act in an aversive way, so there is still a mechanism in some part of the brain/neural network that causes this behaviour and has access to the stimulus. What’s the morally significant difference between a stimulus being in some neurons and being in others, such that you call only one location “awareness”?
There is a mechanism in the brain that has access to / represents the physical damage. There is no mechanism in the brain that has access to / represents the aversive response to the physical damage, since there is no meta-representation in first-order systems. Thus not a single part of the nervous system represents aversiveness; it can be found nowhere in the system.
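To make that contrast concrete, here is a minimal toy sketch in Python (purely illustrative; the class names, fields, and threshold are made up for this comment, not anything from the post): a first-order agent that represents bodily damage and has a learnt disposition to withdraw, versus an agent that additionally represents its own aversive response to that damage.

```python
from dataclasses import dataclass

# Illustrative toy only. "FirstOrderAgent" stands in for Gilbert / the network:
# it represents damage and reacts, but nothing in it models the reaction itself.

@dataclass
class FirstOrderAgent:
    """Represents physical damage and reacts to it, but holds no
    representation of its own aversive response."""
    damage_signal: float = 0.0  # first-order representation: state of the body

    def step(self, stimulus: float) -> str:
        self.damage_signal = stimulus
        # A learnt disposition: damage above a threshold triggers withdrawal.
        # The response itself is modelled nowhere in the system.
        return "withdraw" if self.damage_signal > 0.5 else "continue"


@dataclass
class MetaRepresentingAgent(FirstOrderAgent):
    """Additionally represents its own aversive response to the damage,
    i.e. a meta-representation of the first-order reaction."""
    felt_aversiveness: float = 0.0  # a representation *of the response*, not of the body

    def step(self, stimulus: float) -> str:
        action = super().step(stimulus)
        # The aversive response itself becomes part of the system's model.
        self.felt_aversiveness = 1.0 if action == "withdraw" else 0.0
        return action


if __name__ == "__main__":
    gilbert = FirstOrderAgent()
    human = MetaRepresentingAgent()
    print(gilbert.step(0.9), gilbert.damage_signal)   # withdraws; no aversiveness represented anywhere
    print(human.step(0.9), human.felt_aversiveness)   # withdraws *and* represents the aversion
```

Both agents show the same avoidance behaviour; on my criterion the difference is only that the second one has a state that is about its own response, which is what I mean by aversiveness being present in the system.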
First, you can still infer meta-representation from your behavior. Second, why does it matter that you represent aversiveness? What’s the difference? Representation of aversiveness and representation of damage are both just states of some neurons that model other neurons (representation of damage still implies the possibility of modeling neurons, not only external states, because your neurons are connected to other neurons).