Yeah, I think a lot of the moral philosophizing about AIs suffers from reliance on human-based priors. While I'm happy some people work on digital minds welfare, and more people should do it, a large part of the effort seems stuck on an implicit model of a unitary mind that can be both a moral agent and a moral patient.