This essay defines and clearly explains an important property of human moral intuitions: the divergence of possible extrapolations from the parts of the state space we’re used to thinking about. This property poses a challenge in moral philosophy, with implications for AI alignment and for long-term or “extreme” thinking in effective altruism. Although the idea wasn’t especially novel to me personally, it is valuable to have a solid reference for explaining the concept.