Ah, I wasn’t really referring to the OP, more to people in general who might blindly equate vague notions of whatever consciousness might mean with moral value. I think that’s an oversimplification, and possibly a dangerous one. Combined with symmetric population ethics, one result could be that we’d need to push for spamming the universe with maximally happy AIs, and even for replacing humanity with maximally happy AIs, since they’d contain more happiness per kg or m³. I think that would be madness.
Animals: yes, some. Future AIs: possibly.
If I had to speculate, I’d guess that self-awareness is simply included in any good world model, and that sentience is a control feedback loop, in both humans and AIs. These two things together, perhaps combined in something like a global workspace, might make up what some people call consciousness. Both are obviously useful for steering machines in a desired direction. But I fear they will turn out to be trivial engineering results: one could argue that an automatic vacuum cleaner has feelings, since it has a feedback loop steering it clear of walls. That doesn’t mean it should have rights.
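To make concrete how minimal such a feedback loop can be, here’s a toy sketch of the vacuum-cleaner case (all names and numbers are hypothetical, purely for illustration): if sentience were nothing more than sense-and-correct, this is already “sentient” in that deflationary sense.

```python
import random

def read_bump_sensor() -> bool:
    """Stand-in for a real sensor: True when the vacuum bumps a wall."""
    return random.random() < 0.2  # hypothetical 20% chance of a bump

def step(heading: float) -> float:
    """One control-loop iteration: sense the wall, then steer away from it."""
    if read_bump_sensor():
        heading = (heading + 90.0) % 360.0  # negative feedback: turn away
    return heading

heading = 0.0
for _ in range(10):
    heading = step(heading)
    print(f"heading: {heading:.0f}°")
```

A dozen lines of code implement the full sense-adjust loop, which is exactly why I’d be wary of granting moral status on that basis alone.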
I think the morality question is a difficult one that will remain subjective, and that we should vote on it rather than try to solve it analytically. I think the latter approach is doomed.