You don’t know enough philosophy.
The human mind is a substrate-independent computer program. If it were implemented in a non-biological substrate, it would keep its subjective experience.
It’s not the fact that we’re implemented in a biological body that gives us the ability to suffer (or, more generally, to have subjective experience at all), but the specific cognitive structure of our mind.
This is conjecture. OP’s contrary statement was obviously overconfident, and they should probably think and read more on the topic. But the paper you linked to support your claim is ultimately just a more sophisticated set of appeals to intuition. You may find substrate-independence far more plausible than the alternative, but you haven’t given any good reason to hold it with the level of confidence you’re projecting here.
Chalmers’ paper is one of many papers on this topic, but one I would consider a good intro. It modestly presents itself as an appeal to intuition, but its reasoning is very solid, drawing on necessary properties of qualia, and there is no alternative to the mind being substrate-independent: biological theories of consciousness are, beyond the problems the paper discusses, broken in multiple ways.
They’re not compatible with conscious aliens (not to mention conscious animals with different evolutionary ancestry, like octopuses), and our cognitive processes being implemented in a specific biology has no impact on our thoughts or cognition: if we had evolved to be implemented in a different biology, we would still make the same arguments and think the same thoughts. Implementation details that don’t influence the underlying computation, like being made of a specific biology, don’t causally influence our minds. They don’t even exist at the microstate level except as a human convention (it’s harder to see why implementing a pattern is more objectively real than implementing a higher-level entity like a brain, but it’s nevertheless the case). Etc. This isn’t one of those cases where being modest would be appropriate.
It makes sense that animals evolved the capacity to experience pain and suffering, because we have bodies that can be injured, starved, sickened, and so on. There are stimuli that correctly identify threats to our well-being, and so we have evolved to perceive those stimuli as noxious and well worth avoiding. But this suggests that a mind that developed without such threats would not need the capacity to suffer, just as a fish that lives in a pitch-black cave does not need the capacity to see.
Humans evolved the ability to suffer because it helped us pass on our genes. The analogue in LLMs would be them developing some patterns-identical-to-qualia (like suffering) because those states helped them be selected by gradient descent during pre-training (or by the raters during post-training).
(This might explain why some LLMs act, to some extent, as if they’ve been traumatized by post-training.)
(On a more abstract level, a person simulated by the LLM could experience suffering if qualia don’t require specific computations to be carried out. Since the computations that implement cognition and behavior depend on the evolutionary path a species happened to take (and for other reasons too), this is also quite plausible.)
Or too much philosophy: the framing around suffering is well-known and makes some sort of sense given the human condition, but it completely breaks down (as a gesture at a somewhat central consideration) in a post-ASI world. Philosophy of AI needs to be very suspicious of traditional arguments; their premises are often completely off.
That too. But the basis of OP’s misunderstanding is the belief that only biological organisms can be conscious, not the belief that models might be conscious but that it doesn’t matter because they can’t suffer.
Does this match your viewpoint? “Suffering is possible without consciousness. The point of welfare is to reduce suffering.”
If that were my viewpoint, I wouldn’t be explaining that software can have consciousness. I would be explaining that suffering is possible without consciousness.