But I don’t think passing the Turing test matters as much as embodiment.
Psychologically, that’s what matters to many humans, I think, yeah.
Arguably, AIs already pass the Turing test via a text interface…
They do.
…but almost no one thinks that means they are conscious or moral patients.
The question is how much of that is subconsciously absorbing the statements of companies who sell AI services, or absorbing other people’s beliefs, or not seeing AIs as “real” (as if the human mind were anything other than a pattern), and how much of it is consciously reflecting on technical aspects of consciousness.
My best guess is that people’s reasoning about this both starts and terminates on automatically evaluating software as “not real” and “not really self-aware.”
Making AIs embodied might help for that reason—just like making human mind uploads embodied would help people accept them as self-aware.
Evolutionarily, what computations implement our cognition and behavior is an accident, and there is no rational reason to insist on the internal computations of AIs being the same before we accept them as true moral patients.
I am not at all sure that there is a rational derivation of which entities should or should not be considered as moral patients by humans. So I’m not sure that demanding embodiment for moral patienthood is an error.
Generally I’m comfortable leaving these decisions of moral patienthood of entities that don’t yet exist to the sensibilities of future people.
So, to give a specific example, if you considered a mind upload with conscious states identical to yours (but not embodied), it’s possible it would be morally permissible for (embodied) humans to torture it for fun?
Not a big believer in hypotheticals. Mind uploading gets into some very weird issues. I will leave it to future society to decide, when and if that happens. I would say that if people are torturing anything for fun, even something with no capacity for pain, that doesn’t sound morally good to me.