Well, that’s not true for everyone here, I suspect.
Eliezer, for example, does seem very concerned with whether the optimization process that gets constructed (or, at least, the process he constructs) has some attribute that is variously labelled by various people as “is sentient,” “has consciousness,” “has qualia,” “is a real person,” etc.
Presumably he’d be delighted if someone proved that a simulation of a human created by an AI can’t possibly be a real person because it lacks some key component that mere simulations cannot have. He just doesn’t think it’s true. (Nor do I.)