These issues matter not just for human altruism but also for AI value systems. If an AI takeover occurs, and if the AI(s) care about the welfare of other beings at all, they will have to make judgements about which entities even have a well-being to care about, and about how to aggregate all those individual welfares for the purpose of decision-making. Even from a purely self-interested perspective, moral relativism is not enough here, because in the event of AI takeover, you, the human individual, will be on the receiving end of AI decisions. It would be good to have a proposal for an AI value system that is both safe for you as an individual and appealing enough to people in general that it has a chance of actually being implemented.
Meanwhile, the CEV philosophy tilts towards moral objectivism. It supposes that the human brain implicitly follows some decision procedure specific to our species, that this encompasses what we call moral decisions, and that the true moral ideal of humanity would be found by applying this decision procedure to itself (“our wish if we knew more, thought faster, were more the people we wished we were”, etc.). It is not beyond imagining that if you took a brain-based value system like PRISM (LW discussion) and “renormalized” it according to a CEV procedure, it would output a definite standard for comparing and aggregating different welfares.
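(To make concrete why a definite aggregation standard matters, here is a minimal, purely illustrative sketch. The entities, numbers, and the three aggregation rules are assumptions chosen for the example, not anything drawn from CEV or PRISM; the point is only that the same welfare profiles get ranked differently depending on which standard you pick.)

```python
# Purely illustrative: toy welfare profiles and three toy aggregation rules,
# showing that the choice of standard changes which outcome comes out on top.
# All names and numbers here are made up for the example.

welfare_profiles = {
    # outcome -> welfare assigned to each entity counted as a moral patient
    "outcome_A": {"human_1": 5, "human_2": 5, "animal_1": 5},
    "outcome_B": {"human_1": 9, "human_2": 9, "animal_1": -2},
}

def total(profile):    # sum of welfares (classical-utilitarian flavour)
    return sum(profile.values())

def average(profile):  # average welfare
    return sum(profile.values()) / len(profile)

def maximin(profile):  # rank outcomes by their worst-off entity (Rawlsian flavour)
    return min(profile.values())

for rule in (total, average, maximin):
    best = max(welfare_profiles, key=lambda o: rule(welfare_profiles[o]))
    print(f"{rule.__name__:>8} prefers {best}")

# total and average prefer outcome_B; maximin prefers outcome_A.
```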
This would be a good reason not to let AIs take over!
On a more serious note: I think trying to give AI systems an objective moral framework (one not grounded in a human perspective) is impossible to get right and likely to end badly for human values.
It’s more worthwhile to focus on giving AI systems a human-subjective framework. I buy that human values are good & should be preserved.