My impression of it:

H-morality approximates certain (objective, mathematical) truths about things such as achieving well-being and cooperation among agents, just as human counting and adding ability approximates certain truths about natural numbers. P-morality does not approximate truths about well-being and cooperation among agents.
A creature that watches sheep passing into a sheepfold and recites, “One, two, seventeen, six, one, two …” (and imagines the actual numbers that these words refer to) is not doing counting, and a creature whose highest value is prime-numbered pebble piles is not doing morality.
Morality, in the sense of “approximating mathematical truths about things such as achieving well-being and cooperation among agents”, is not just an arbitrary provincial value; it is a Good Move. And it is a self-catalyzing Good Move: getting prime-numbered piles of pebbles does not make you more able to make more of them, but achieving well-being and cooperation among agents does make you more able to make more of it.
(EDIT: I no longer believe the above is the point of the article. Not using the retract button on account of making it hard to read is just silly.)
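Since the summary above leans on the analogy between morality and arithmetic, here is a minimal sketch of how crisp the Pebblesorters’ standard is. It is my own illustration, not anything from the article or this thread; “p-correct” and the pile sizes are just labels I chose. The point is only that p-correctness of a pile (primality of its size) is a well-defined mathematical property that nonetheless mentions no agents.

```python
# Illustrative only: "p-correct" here just means "the pile size is prime",
# which is my reading of the Pebblesorters' standard, not a term from the thread.

def is_prime(n: int) -> bool:
    """Return True if n is prime, i.e. a pile of n pebbles is p-correct."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

# A Pebblesorter judging some piles (sizes chosen arbitrarily):
for size in [2, 3, 6, 13, 17, 21]:
    verdict = "p-correct" if is_prime(size) else "p-incorrect"
    print(f"A pile of {size} pebbles is {verdict}.")
```

Nothing in this check refers to well-being or cooperation, which is the sense in which p-morality fails to approximate truths about those things even though it tracks a perfectly objective property.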
P-morality has a different view about well-being of agents. P-well-being consists solely of the universe having more piles of properly sorted pebbles. Hunger of agents is p-irrelevant, except that it might indirectly affect the sorting of pebbles. If a properly sorted pile of pebbles can be scattered to prevent the suffering of an agent, it p-should not be.
Conversely, h-morality considers suffering of agents to be directly h-relevant, and the sorting of piles of pebbles is only indirectly h-relevant. An agent h-should not be tortured to prevent the scattering of any pile of pebbles.
None of this provides a reason why torturing agents is objectively o-worse than scattering pebbles, so it does not validate any claim to objective morality. To appeal to objective morality, we first have to accept that anything that is h-right and/or p-right may or may not be o-right. Frankly, I’m scared enough that this is the case that I would rather remain h-right and stay ignorant of what is o-right than risk finding that o-right differs significantly from what is h-right. From the subjective point of view, that is even the h-right decision to make. The pebblesorters would agree in their own terms: it is p-wrong to try to change to o-morality, just as it is p-wrong to change to h-morality.
If I haven’t misunderstood this comment, this is not Eliezer’s view at all. See the stuff about no universally compelling arguments; though you don’t seem to be suggesting that such arguments exist, I think you are making a similar error: a paperclip maximizer would not agree that achieving well-being and cooperation are inherently Good Moves. We would not inherently value well-being and cooperation if we had not evolved to do so. (For the sake of completeness, the fact that I phrased the previous sentence as a counterfactual should not be taken to indicate that I find it excessively likely that we did, in fact, evolve to value such things.)
I’m >.9 confident that EY would agree with you that, supposing we do inherently value well-being and cooperation, we would not if we had not evolved to do so. I’m >.8 confident that EY would also say that valuing well-being and cooperation (in addition to other things, some of which might be more important) is right, or perhaps right, and not just “h-right”.
For my own part, I think “inherently” is a problematic word here. A sufficiently sophisticated paperclip maximizer would agree that cooperation is a Good Move, in that it can be used to increase the rate of paperclip production. I agree that cooperation is a Good Move in roughly the same way.
I agree that EY would say both those things. I did not mean to contradict either in my comment.
“A sufficiently sophisticated paperclip maximizer would agree that cooperation is a Good Move, in that it can be used to increase the rate of paperclip production. I agree that cooperation is a Good Move in roughly the same way.”
That is part of what I was trying to convey with the word ‘inherently’. The other part is that I think EY would say that humans do value some forms of cooperation, such as friendship, inherently, in addition to their instrumental value. I am, however, a bit less confident of that than of the things I have said about EY’s metaethical views.
Most variants of h-morality inherently value those things. Many other moralities also value those things. That does not make them objectively better than their absence. Note that the presence of values in a specified morality is a factual question, not a moral one.
Whether or not h-morality h-should value cooperation and friendship inherently is a null question. H-moralities h-should be whatever they are, by definition. Whether or not h-morality o-should do so is a question that requires understanding o-morality to answer.
If so, I’ve badly slipped a meta-level.