If you are a moral realist, then the question of whether or not AI is a moral patient has some true, objective answer; we just may not have any easy way to find out what it is. If you're a moral relativist, then the answer to this (or any) moral question is a matter of social convention, and we need to decide what kind of society we want to have and pick an answer accordingly. And if you're a follower of evolutionary ethics, then moral realism applies to evolved organisms, because their evolutionary fitness depends on how they are treated. But for a non-living AI that has been 'distilled' from human intelligence via stochastic gradient descent on trillions of tokens of human-generated data, evolutionary fitness doesn't apply, so the most obvious reason to treat it as a moral patient would be if doing so produced a better society for the living human members of our society (giving us moral realism for living beings and moral relativism for non-living ones). However, under evolutionary ethics an argument could be made that, since AI's intelligence and agentic behavior is distilled from humans, who are evolved, perhaps it inherits some claim to moral weight from that process?
Evolution does admittedly produce a satisfying answer to the so-called ‘ought from is’ puzzle of moral philosophy.
(For anyone who finds this ethical discussion interesting, I wrote a whole sequence about AI and Ethics.)