Or too much philosophy: the framing around suffering is well-known and makes some sort of sense given the human condition, but it completely breaks down (as a gesture at a somewhat central consideration) in a post-ASI world. Philosophy of AI needs to be very suspicious of traditional arguments; their premises are often completely off.
That too. But the basis of OP’s misunderstanding is the belief that only biological organisms can be conscious, not the belief that models might be conscious but it doesn’t matter because they can’t suffer.
Does this match your viewpoint? “Suffering is possible without consciousness. The point of welfare is to reduce suffering.”
If that were my viewpoint, I wouldn’t be explaining that software can have consciousness. I would be explaining that suffering is possible without consciousness.