You don’t know enough philosophy.
The human mind is a substrate-independent computer program. If it were implemented in a non-biological substrate, it would keep its subjective experience.
It’s not the fact that we’re implemented in a biological body that gives us the ability to suffer (or, generally, the ability to have subjective experience), but the specific cognitive structure of our mind.
It makes sense that animals evolved the capacity to experience pain and suffering, because we have bodies that can be injured, starved, sickened, and so on. There are stimuli that reliably signal threats to our well-being, and so we have evolved to perceive those stimuli as noxious and well worth avoiding. But this suggests that a mind that developed without such threats would not need the capacity to suffer, just as a fish that lives in a pitch-black cave does not need the capacity to see.
Or too much philosophy: the framing around suffering is well known and makes some sort of sense given the human condition, but it completely breaks down (as a gesture at a somewhat central consideration) in a post-ASI world. Philosophy of AI needs to be very suspicious of traditional arguments; their premises are often completely off.
That too. But the basis of OP’s misunderstanding is the belief that only biological organisms can be conscious, not the belief that models might be conscious but that it doesn’t matter because they can’t suffer.
Does this match your viewpoint? “Suffering is possible without consciousness. The point of welfare is to reduce suffering.”
If that were my viewpoint, I wouldn’t be explaining that software can have consciousness. I would be explaining that suffering is possible without consciousness.