As someone who often passionately rants against the AI-successionist line of thinking, the most common objection I hear is "why is your definition of value so arbitrary as to stipulate that biological meat-humans are necessary?" This is missing the crux: I agree that such a definition of moral value would be hard to justify.
But my position doesn't rest on that stipulation. I follow the philosophical school of evolutionary ethics, which derives ethical value from evolutionary theory about the behavior of social organisms (which will tend to evolve things like a sense of fairness). This gives a clear, non-religious solution to the ought-from-is problem otherwise inherent to moral realism: suffering is bad because it's a close proxy for things that decrease the well-being and evolutionary fitness of a living being subject to evolutionary forces. And in my view as a human, it's particularly bad when it applies to other humans; that's how my moral sense evolved, to apply to other members of the same coalition of tribes. (Rationally, I'm willing to expand this to at least the entire human species; if we ever meet an alien sapient species, I'd need to assess whether they're utility monsters.)
As such, it's pretty easy to derive from evolutionary ethics a corollary: if something is non-living and not evolved, and therefore not subject to evolutionary fitness, then it is not a moral patient (unless, for some reason, it would benefit humans for us to treat it as one, as we do with pets). In that moral framework, no current AI is a moral patient, and the utilitarian value of a world containing only AI and no living beings is zero.
Note that under this moral framework, the moral status of an uploaded human is a much more complicated question: they're a product of evolution, even though they're no longer subject to it.