So, can someone summarize why EY thinks that p-morality is inferior (not just h-inferior) to h-morality, which he seems to call a one-place function “morality”? The OP and the following discussion did not make it clear for me.
He doesn’t. He only thinks that p-morality is h-inferior. He doesn’t believe that there’s such a thing as “inferior”.
EDIT: Hmmm… I don’t really mean that EY doesn’t believe that there’s such a thing as “inferior”. I just mean that when he uses the word “inferior” he means “h-inferior”. He doesn’t think that there’s some universal “inferior” by which we can judge p-morality against h-morality, but of course p-morality is h-inferior to h-morality.
Can you expand on your reasons for believing this? It seems very unlikely to me.
Does my edit help? I can’t see how it’s very unlikely; it’s how I’ve understood the whole of the meta-ethics sequence.
Well, it helps, in that it clarifies your reasoning. Thanks.
That said, I continue to think that EY would reject a claim like “p-morality is h-inferior to h-morality” to the extent that its symmetrical counterpart, “h-morality is p-inferior to p-morality”, is considered equivalent; I expect he would reply with some version of “No, p-morality is inferior to h-morality, which is right.”
IOW, my own understanding of EY’s position is similar to shminux’s, here: that human morality is right, and other moralities (supposing they exist at all) are not right. It seems to follow that other moralities are inferior.
But I don’t claim to be any sort of expert on the subject of EY’s beliefs, and it’s ultimately not a very important question; I’m content to agree to disagree here.
Oh, I think I get it now.
He’s saying that he uses “right” to mean the same thing everyone else does — because the “everyone else” he cares about are human and share human values. Words like “right” (and “inferior”) don’t point to something outside of human experience; they point to something within it. We are having this conversation within human experience, not outside it, so words have their human meanings — which are the only meanings we can actually refer to.
Saying “h-right” is like saying “h-Boston”. The meaning of “Boston” is already defined by humanity; you don’t have to put “h-” in front of it.
It’s just a fact about us that we do not respond to p-rightness in the same way that we respond to h-rightness, and our word “right” refers to the latter. You wouldn’t go out and do things because of those things’ p-rightness, after all. Rightness, not p-rightness, is what motivates us.
It’s part of what we are — just as we (usually) have particular priors. We don’t say “h-evidence” for “the sort of evidence that we find convincing” and contrast this with “y-evidence” which is the sort of evidence that a being who always believes statements written in yellow would find convincing. “h-evidence” is just what “evidence” means.
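To make the one-place/two-place function talk concrete, here is a minimal sketch; the names (rightness_2place, human_standard, pebblesorter_standard) and their toy definitions are my own stand-ins for illustration, not anything defined in the sequence.

```python
from functools import partial

def rightness_2place(action, standard):
    """Two-place: evaluate an action against an explicitly supplied standard."""
    return standard(action)

def human_standard(action):
    # Toy stand-in for the (enormously complicated) human evaluation of an action.
    return action.get("reduces_suffering", False)

def pebblesorter_standard(action):
    # Toy stand-in for the Pebblesorter evaluation: prime heaps are all that matter.
    return action.get("makes_prime_heap", False)

# On the reading sketched above, the ordinary word "right" is already the
# one-place function you get by fixing the human standard -- just as "Boston"
# already names the human-defined city without any "h-" prefix.
right = partial(rightness_2place, standard=human_standard)
p_right = partial(rightness_2place, standard=pebblesorter_standard)

action = {"reduces_suffering": True, "makes_prime_heap": False}
print(right(action))    # True  -- "right" in the ordinary, human sense
print(p_right(action))  # False -- a different one-place function altogether
```

The point of the sketch is only that the curried function right is what the ordinary word already names, while p_right is a different function that nothing in our usage points to.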
I think I agree with you, which is strange because it looks like TheOtherDave also agrees with you, but disagrees with me.
In general, it’s not strange at all for A and B to both agree completely with C, but disagree with each other. For example, if C says “Pie is yummy!”, B says “Pie is yummy and blueberry is the best!” and A says “Pie is yummy and cherry is the best!”
In this case, I disagree with your assertion that EY does not believe that Pebblesorter morality is inferior to human morality, an assertion fubarobfusco does not make.
I do think Eliezer is saying that Pebblesorter morality is inferior to human morality, specifically insofar as the only thing that “inferior” can refer to in this sense is also “h-inferior” — all the inferiorness that we know how to talk about is inferiorness from a human perspective, because hey, that’s what perspective we use.
(nods) I agree. If Oscar_Cunningham agrees as well, then we all agree.
I also agree. Yay!
I think so too. I really like the way you explained this.
Well, again, I suspect he would instead say that he uses “right” the right way, which is unsurprisingly the way all the other people who are right use it. But that bit of nomenclature aside, yes, that’s my understanding of the position.
My impression of it:
H-morality approximates certain (objective, mathematical) truths about things such as achieving well-being and cooperation among agents, just as human counting and adding ability approximates certain truths about natural numbers. P-morality does not approximate truths about well-being and cooperation among agents.
A creature that watches sheep passing into a sheepfold and recites, “One, two, seventeen, six, one, two …” (and imagines the actual numbers that these words refer to) is not doing counting, and a creature whose highest value is prime-numbered pebble piles is not doing morality.
Morality, in the sense of “approximating mathematical truths about things such as achieving well-being and cooperation among agents”, is not just an arbitrary provincial value; it is a Good Move. And it is a self-catalyzing Good Move: getting prime-numbered piles of pebbles does not make you more able to make more of them, but achieving well-being and cooperation among agents does make you more able to make more of it.
(EDIT: I no longer believe the above is the point of the article. Not using the retract button on account of making it hard to read is just silly.)
P-morality has a different view about well-being of agents. P-well-being consists solely of the universe having more piles of properly sorted pebbles. Hunger of agents is p-irrelevant, except that it might indirectly affect the sorting of pebbles. If a properly sorted pile of pebbles can be scattered to prevent the suffering of an agent, it p-should not be.
Conversely, h-morality considers suffering of agents to be directly h-relevant, and the sorting of piles of pebbles is only indirectly h-relevant. An agent h-should not be tortured to prevent the scattering of any pile of pebbles.
None of this provides a reason why torturing agents is objectively o-worse than scattering pebbles, and so it does not validate any claim to objective morality. To appeal to objective morality, we first have to accept that everything that is h-right and/or p-right may or may not be o-right. Frankly, I’m scared enough that that is the case that I would rather remain h-right and stay ignorant of what is o-right than take the risk that o-right differs significantly from what is h-right. From the subjective point of view, that is even the h-right decision to make. The pebblesorters also agree: it is p-wrong to try to change to o-morality, just as it is p-wrong to change to h-morality.
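For concreteness, a tiny sketch of the h-should/p-should point; the evaluator functions and numbers below are made up for illustration and are not from the post.

```python
def h_disvalue(outcome):
    # Human evaluator: suffering is directly h-relevant; pebble heaps are not.
    return 1_000_000 * outcome["beings_suffering"]

def p_disvalue(outcome):
    # Pebblesorter evaluator: scattered prime heaps are directly p-relevant;
    # suffering is not.
    return outcome["prime_heaps_scattered"]

scatter_heap_to_prevent_suffering = {"beings_suffering": 0, "prime_heaps_scattered": 1}
leave_heap_and_allow_suffering = {"beings_suffering": 1, "prime_heaps_scattered": 0}
options = [scatter_heap_to_prevent_suffering, leave_heap_and_allow_suffering]

# h-should picks scattering the heap; p-should picks leaving it intact.
print(min(options, key=h_disvalue))
print(min(options, key=p_disvalue))
# Note: no o_disvalue function appears anywhere above to arbitrate between them.
```

Each “should” falls out of its own evaluator, and nothing in the setup supplies a third, “o-” evaluator.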
If I haven’t misunderstood this comment, this is not Eliezer’s view at all. See the stuff about no universally compelling arguments; though you don’t seem to be suggesting that such arguments exist, I think you are making a similar error. A paperclip maximizer would not agree that achieving well-being and cooperation are inherently Good Moves. We would not inherently value well-being and cooperation if we had not evolved to do so. (For the sake of completeness, the fact that I phrased the previous sentence as a counterfactual should not be taken to indicate that I find it excessively likely that we did, in fact, evolve to value such things.)
I’m >.9 confident that EY would agree with you that, supposing we do inherently value well-being and cooperation, we would not if we had not evolved to do so.
I’m >.8 confident that EY would also say that valuing well-being and cooperation (in addition to other things, some of which might be more important) is right, and not just “h-right”.
For my own part, I think “inherently” is a problematic word here. A sufficiently sophisticated paperclip maximizer would agree that cooperation is a Good Move, in that it can be used to increase the rate of paperclip production. I agree that cooperation is a Good Move in roughly the same way.
I agree that EY would say both those things. I did not mean to contradict either in my comment.
That is part of what I was trying to convey with the word ‘inherently’. The other part is that I think EY would say that humans do value some forms of cooperation, such as friendship, inherently, in addition to their instrumental value. I am, however, a bit less confident of that than of the things I have said about EY’s metaethical views.
Most variants of h-morality inherently value those things. Many other moralities also value those things. That does not make them objectively better than their absence. Note that the presence of values in a specified morality is a factual question, not a moral one.
Whether or not h-morality h-should value cooperation and friendship inherently is a null question. H-moralities h-should be whatever they are, by definition. Whether or not h-morality o-should do so is a question that requires understanding o-morality to answer.
If so, I’ve badly slipped a meta-level.