If you believe there is no objective way to compare valence between individuals, then I don’t see how you can claim that it’s wrong to discount the welfare of red-haired people.
You can call that evil according to your own values, but then someone else can just as easily say that ignoring bee welfare is evil.
I guess you could say “Ignoring red-haired people is evil and ignoring bees isn’t evil, because those are my values”, but I don’t know how you can expect to convince anyone else to agree with your values.
If you mean evil according to their values, then sure, this just seems correct. If someone doesn’t hold to objective morality, then the same is true for every moral question; some are just less controversial than others. And you CAN make arguments of the form: if you agree with me on moral premise X, then conclusion Y holds.
On some level, yes, it is impossible to critique another person’s values as objectively wrong; utility functions in general are not up for grabs.
If person A values bees at zero, and person B values them as equivalent to humans, then person B might well call person A evil, but that in and of itself is a subjective (and, let’s be honest, social) judgement aimed at person A. When I call people evil, I’m attempting to apply certain internal and social labels to them in order to help myself and others navigate interactions with them, as well as to create better decision-theoretic incentives for people in general.
(Example: calling a businessman who rips off his clients evil, in order to remind oneself and others not to make deals with him, and incentivize him to do that less.
Example: calling a meat-eater evil, to remind oneself and others that this person is liable to harm others when social norms permit it, and incentivize her to stop eating meat.)
However, I think lots of people are amenable to arguments that one’s utility function should be more consistent (and therefore lower-complexity). This is basically the basis of fairness and empathy as concepts (it is why shrimp welfare campaigners often list a bunch of human-like shrimp behaviours in their campaigns: to imply that shrimp are similar to us, and that we should therefore care about them).
If someone does agree with this, I can critique their utility function on the grounds of it being more or less consistent. For example, if we imagine looking at various mind-states of humans and clustering them somehow, we would see the red-haired mind-states mixed in with everyone else. Separating them out would be a high-complexity operation.
If we added a bunch of bee mind-states, they would form a separate cluster. Giving some comparison factor would be a low-complexity operation: you basically have to choose a real number and then roll with it.
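For concreteness, here is a toy sketch of that clustering picture (my own illustration, not anything from the thread or the RP report; the feature space, the distributions, and the 0.01 weight are all invented). Human mind-states, red-haired or not, come from one distribution and land in one cluster; bee mind-states land in a separate cluster, and bridging the two takes exactly one chosen number.

```python
# Toy illustration only: "mind-states" as random feature vectors.
import numpy as np

rng = np.random.default_rng(0)

# 200 human mind-states in a 10-d feature space; hair colour is an external
# label, not something reflected in the features themselves.
humans = rng.normal(loc=0.0, scale=1.0, size=(200, 10))
red_haired = rng.random(200) < 0.1           # ~10% of humans, interleaved with the rest

# 50 bee mind-states, far away in the same feature space.
bees = rng.normal(loc=8.0, scale=1.0, size=(50, 10))
states = np.vstack([humans, bees])           # rows 0..199 are humans, 200..249 are bees

# Crude 2-means clustering: farthest-point initialisation, then a few Lloyd steps.
c0 = states[0]
c1 = states[np.argmax(np.linalg.norm(states - c0, axis=1))]
centroids = np.stack([c0, c1])
for _ in range(5):
    dists = np.linalg.norm(states[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    centroids = np.stack([states[labels == k].mean(axis=0) for k in range(2)])

# The human/bee split falls out of the geometry: all humans get one label, all bees the other.
assert len(set(labels[:200])) == 1 and len(set(labels[200:])) == 1 and labels[0] != labels[-1]

# Comparing across the bee cluster is then one free parameter...
bee_weight = 0.01   # chosen, not derived; that is the point
# ...whereas nothing in this geometry picks out the red-haired humans:
# separating them means importing the external `red_haired` label wholesale.
```

Nothing here proves anything, of course; it just makes the complexity asymmetry the comment is pointing at concrete.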
If there really were a natural way to compare wildly different mental states, one roughly in line with thinking about my own experiences of the world, then that would be great. But the RP report doesn’t supply that.
You’re not ultimately limited to utilitarianism: you can use Kantian or Rawlsian arguments to include redheads.
@Joseph Miller
The situation is more complex, and less bad, than you are making it out to be. For instance, the word “qualia” is an attempt to clarify the word “consciousness”, and does have a stipulated meaning, for all that some people ignore it. The contention about words like “qualia” and “valence” is about whether and how they are real, and that is not a semantic issue. Rationalists have a long-term problem of trying to find objective valence in a physical universe, even though Hume’s fork tells you it’s not possible.
@Rafael Harth
@Seth Herd
If the success of a moral theory ultimately grounds out in intuition, it’s OK to use unintuitiveness to summarily reject a theory.
@Mitchell_Porter
CEV is group-level relativism, not objectivism.
I think Eliezer’s attempt at moral realism derives from two things: first, the idea that there is a unique morality which objectively arises from the consistent rational completion of universal human ideals; second, the idea that there are no other intelligent agents around with a morality drive that could have a different completion. Other possible agents may have their own drives or imperatives, but those should not be regarded as “moralities”; that’s the import of the second idea.
This is all strictly phrased in computational terms too, whereas I would say that morality also has a phenomenological dimension, which might serve to further distinguish it from other possible drives or dispositions. It would be interesting to see CEV metaethics developed in that direction, but that would require a specific theory of how consciousness relates to computation, and especially how the morally salient aspects of consciousness relate to moral cognition and decision-making.
He seems to believe that, but I don’t see why anyone else should. It’s like saying English is the only language, or the Earth is the only planet. If morality is having values, any number of entities could have values. If it’s rules for living in groups, ditto. If it’s fairness, ditto.
It’s not strictly phrased at all, or particularly computational. It’s very hard to follow what he’s saying.
I agree that unintuitiveness is a valid reason to reject the theory and the report; that doesn’t contradict my comment.
This feels like too strong a claim to me. There are still non-objective ways to compare valence between individuals; J Bostock mentions “anchor(ing) on neuron count”.
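For what that kind of anchor looks like in practice, here is a minimal sketch of neuron-count anchoring (my own illustration; the function name is made up, the counts are rough order-of-magnitude figures of ~86 billion neurons for a human and ~1 million for a honeybee, and nothing here is taken from the RP report):

```python
# Minimal sketch of "anchoring on neuron count": a simple, admittedly
# non-objective way to set a cross-species welfare weight.

NEURON_COUNT = {
    "human": 86e9,    # rough estimate: ~86 billion neurons
    "honeybee": 1e6,  # rough estimate: ~1 million neurons
}

def welfare_weight(species: str, anchor: str = "human") -> float:
    """Weight one species' valence against the anchor species, by neuron count alone."""
    return NEURON_COUNT[species] / NEURON_COUNT[anchor]

print(welfare_weight("honeybee"))  # ~1.2e-5, i.e. one bee counted as roughly 1/86,000 of a human
```

The choice of anchor and the assumption that welfare scales linearly with neurons are doing all the work, which is exactly why it is non-objective; but it is still a usable comparison rule.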
I might not strongly agree, but I believe in this direction. I think that humans are generally pretty important and I like human values.
There’s always going to be some subjectivity: I think this is good.