I think evolutionary theory is the missing element here. For a living being, suffering has a strong, evolved correlation with outcomes that decrease its health, survival, and evolutionary fitness (and avoiding pain helps it avoid those outcomes). So the things a moral patient objects to correlate strongly with something that is biologically objective, real, quantifiable, and evolutionarily vital.
However, for an AI, this morality-from-evolution argument applies to the humans whose behavior it was trained to simulate, not to the AI itself: it’s not alive, and asking about its evolutionary fitness is a category error. It’s a tool, not a living being, which implies its moral patienthood is comparable to that of a spider’s web or a beaver’s dam.