On some level, yes, it is impossible to critique another person’s values as objectively wrong; utility functions in general are not up for grabs.
If person A values bees at zero, and person B values them as equivalent to humans, then person B might well call person A evil, but that in and of itself is a subjective (and let’s be honest, social) judgement aimed at person A. When I call people evil, I’m attempting to attach certain internal and social labels to them in order to help myself and others navigate interactions with them, as well as to create better decision-theoretic incentives for people in general.
(Example: calling a businessman who rips off his clients evil, in order to remind oneself and others not to make deals with him, and incentivize him to do that less. Example: calling a meat-eater evil, to remind oneself and others that this person is liable to harm others when social norms permit it, and incentivize her to stop eating meat.)
However, I think lots of people are amenable to arguments that one’s utility function should be more consistent (and therefore lower complexity). This is more or less the basis of fairness and empathy as concepts (it’s why shrimp welfare campaigners often list a bunch of human-like shrimp behaviours in their campaigns: to imply that shrimp are similar to us, and that therefore we should care about them).
If someone does agree with this, I can critique their utility function on the grounds of it being more or less consistent. For example, if we imagine taking the various mind-states of humans and clustering them somehow, we would see the mind-states of red-haired people mixed in with everyone else’s. Separating them out would be a high-complexity operation.
If we added a bunch of bee mind-states, they would form a separate cluster. Giving them some comparison factor would be a low-complexity operation: you basically have to choose a real number and then roll with it.
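To make the clustering picture concrete, here is a toy sketch (my own, not from the RP report), assuming purely for illustration that mind-states can be summarised as two-dimensional feature vectors; every number in it is made up:

```python
# Toy illustration: represent "mind-states" as feature vectors and see which
# groupings fall out of an unsupervised clustering. All values are invented.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical human mind-states: hair colour barely shifts cognition, so
# red-haired humans sit inside the general human cloud.
humans = rng.normal(loc=[10.0, 10.0], scale=1.0, size=(200, 2))
red_haired = rng.normal(loc=[10.2, 10.0], scale=1.0, size=(30, 2))

# Hypothetical bee mind-states, assumed to differ on every feature, so they
# land far from the human cloud.
bees = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))

points = np.vstack([humans, red_haired, bees])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(points)

# A two-cluster fit recovers the human/bee split (a low-complexity boundary),
# while the red-haired states are simply mixed into the human cluster: no
# clustering of these points carves them out as their own group.
print("human + red-haired assignments:", np.bincount(labels[:230], minlength=2))
print("bee assignments:               ", np.bincount(labels[230:], minlength=2))
```

The comparison factor in the text is then just one extra scalar: once the bee cluster exists, weighting it relative to the human cluster is a one-parameter choice, whereas carving out red-haired humans would require a whole classifier.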
If there really were a natural way to compare wildly different mental states, one roughly in line with how I think about my own experiences of the world, then that would be great. But the RP report doesn’t supply that.