Wow, I had forgotten about that non-person predicates post. I definitely never thought it would have any bearing on a decision I personally would have to make. I was wrong.
Really? I was under the impression that there was a strong consensus, at least here on LW, that a sufficiently accurate simulation of consciousness is the moral equivalent of consciousness.
“Sufficiently accurate simulation of consciousness” is a subset of the set of things that are artificial minds. You might have a consensus for that class. I don’t think there’s agreement that all minds have the same moral value, or even all minds with a certain level of intelligence.
That’s my understanding as well… though I would say, rather, that being artificial is not a particularly important attribute when evaluating the moral status of a consciousness. IOW, an artificial consciousness is a consciousness, and the same moral considerations apply to it as to other consciousnesses with the same properties. That said, I also think this whole “a tulpa {is,isn’t} an artificial intelligence” discussion is an excellent example of losing track of referents in favor of manipulating symbols, so I don’t think it matters much in context.
This might be a stupid question, but what ethical considerations are different for an “artificial” mind?
When talking about AGI, few people label it as murder to shut down the AI that’s in the box. At least it’s worth a discussion whether it is.
Only if it’s not sapient, which is a non-trivial question.
At least for me, personally, the relevant property for moral status is whether it has consciousness.