I feel like a better way to think about this topic, rather than jumping straight to the conclusion that there is no objective way to compare individuals, is to push the evolutionary argument about tracking fitness-relevant information full-tilt, to the point that one's utility function literally becomes fitness.[1][2]
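To be slightly more concrete, here is one way the proposal could be formalized (this gloss and the inclusive-fitness framing are my own additions, not something the linked posts commit to): rank actions by their expected inclusive fitness,

$$U(a) = \mathbb{E}\left[\, w_{\text{self}}(a) + \textstyle\sum_i r_i\, w_i(a) \,\right],$$

where $w_{\text{self}}(a)$ is your own expected reproductive success after taking action $a$, $w_i(a)$ that of relative $i$, and $r_i$ the coefficient of relatedness. Everything below reads the same if you prefer a different formalization of "fitness."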
Unlike the unitarian approach, this does seem fairly consistent with a surprising number of human values, given enough reflection. For instance, it does value not unduly causing massive amounts of suffering to bees: assuming that such suffering directly affects their ability to perform their functions in ecosystems and the economy, we humans would likely be negatively impacted to some extent. It also seems to endorse cooperation and non-discrimination, since fitness would be hurt both by failing to take full advantage of specialization and by letting others locally increase their own fitness by throwing ours under the bus.
It also comes with a fairly nice argument for why we should expect people to have a utility function that looks like this: any individual with values pointing away from fitness would simply be selected out of the population, so natural selection favors this trait.[3] By this point in human evolution, we should expect most people to at least endorse the outcomes of a decision theory based on this utility function (even if they perhaps wouldn't trust it directly).
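As a purely illustrative toy model (my own sketch, with made-up parameters rather than anything from the cited posts), here is a minimal selection simulation in which agents whose choices are less aligned with fitness reproduce less and are gradually selected out:

```python
# Toy selection model: each agent has an "alignment" in [0, 1], i.e. how much
# its values track fitness. Realized fitness is alignment plus noise, and the
# next generation is sampled in proportion to realized fitness, so average
# alignment drifts upward over generations.
import random

random.seed(0)

def simulate(generations=200, pop_size=1000, noise=0.1):
    population = [random.random() for _ in range(pop_size)]  # start ~uniform
    for _ in range(generations):
        # Realized fitness: fitness-aligned values lead to fitter choices, on average.
        weights = [max(0.0, a + random.gauss(0, noise)) for a in population]
        # Offspring are drawn in proportion to realized fitness.
        population = random.choices(population, weights=weights, k=pop_size)
    return sum(population) / pop_size

print(f"mean alignment after selection: {simulate():.2f}")  # climbs toward 1.0
```

With these (arbitrary) parameters the population's mean alignment rises from roughly 0.5 toward 1.0, which is the whole content of the argument: whatever values don't track fitness are exactly the ones that don't persist.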
Of course, this theory is inherently morally relativist, but given the current environment we live in, I don't think that poses a problem for humans trying to use it. One would have to be careful and methodical enough to consider higher-order consequences, but at least it offers a clearer prompt for how one should actually approach problems.
There are some minor issues with this formulation, such as it not directly handling preferences humans have, like transhumanism. I think an even more ideal utility function would be something like "the existence of the property that, by its nature, is the easiest to optimize," although I'm not sure about that, given how quickly it descends into fundamental philosophical questions.
Also, if any of you know whether there's a more specific name for this version of moral relativism, I'd be happy to hear it! I've been trying to find one (since the view seems rather simple to construct), but I haven't turned up anything.
Of course, the correspondence wouldn't be exact, owing to our reliance on the ancestral environment, the computational and informational difficulty of estimating fitness, and the unfortunately slow pace of evolution, but it should still be a good enough approximation for large swaths of System 1 thinking.