Concerning the charge of relativism: it seems clear that Eliezer is a moral relativist in the sense in which the term is normally understood, though not in his own sense. There may be a legitimate dispute here, but as far as communication goes, we should not be having problems. In deference to common usage, I would reserve "right" for the moral realism of Roko et al. and use something like "h-right" for Eliezer's notion of humanity's abstracted idealized dynamic (his coherent extrapolated volition, CEV), but I don't think it really matters right now.
Roko writes: “My list is the current human notion of goodness to 5 decimal places. Your list seems a lot more reasonable, but that’s probably because you made it up yourself and you are a lot more reasonable than most humans. Are you claiming that the result of CEV, applied to my list, will be your list?”
This, I think, is the interesting question. Eliezer has been leaning heavily on the psychological unity of humankind, but I don't think that unity is enough to carry his argument. The unity of which we speak is (you will forgive me) a relative term. We can agree that complex functional adaptations are species-typical modulo sex, and that all humans are virtually alike compared to the space of all possible minds, but it does not follow that there is no room at all for variation in morality within that tiny dot of human minds, variation that cannot be waved away as trivial. Evopsych can only take us so far; the Standard Social Science Model (SSSM) may have been a mistake, but that does not by itself mean cultural and individual differences don't matter at all. Establishing that would take a separate, stronger argument, at least. (Cf. Virge and me in the comments to "Moral Error and Moral Disagreement.")
So we are left with a difficult empirical question: to what extent do moral differences among humans wash out under CEV, and to what extent are different humans really in different moral reference frames? I fear there is no way to resolve this issue without a tremendous amount of data, and even if you had all the data you needed, it might be easier just to build the AI!