Question about metaethics

In a recent Facebook post, Eliezer said:

You can believe that most possible minds within mind design space (not necessarily actual ones, but possible ones) which are smart enough to build a Dyson Sphere, will completely fail to respond to or care about any sort of moral arguments you use, without being any sort of moral relativist. Yes. Really. Believing that a paperclip maximizer won’t respond to the arguments you’re using doesn’t mean that you think that every species has its own values and no values are better than any other.

And so I think part of the metaethics sequence went over my head.

I should re-read it, but I haven’t yet. In the meantime, I want to give a summary of my current thinking and ask some questions.

My current take is that morality, unlike facts about the world, is a question of preference. The important caveats are:

  1. The preference set has to be consistent. Until we develop something akin to CEV, humans are probably stuck with a pre-morality in which they behave and think in contradictory ways over time while believing they hold a perfectly consistent moral system.

  2. One can be mistaken about morality, but only in the sense of unknowingly holding values different from what the deliberative part of one’s mind thinks it holds. The mistake can come from an introspection failure or a logical error. Once the ground values are identified (not that doing so is practically feasible), “wrong” is a type error.

  3. It is OK to fight for one’s morality: just because it’s subjective doesn’t mean one can’t push for it. So “moral relativism” in the strong sense isn’t a consequence of morality being a preference, though “moral relativism” in the weak, technical sense (that it’s subjective) is.

I am curious about the following:

  • How does your current view differ from what I’ve written above?

  • How exactly does that differ from the thesis of the metaethics sequence? In the same post, Eliezer also said: “and they thought maybe I was arguing for moral realism...”. I did kind of think that, at times.

  • I specifically do not understand this: “Believing that a paperclip maximizer won’t respond to the arguments you’re using doesn’t mean that you think that every species has its own values and no values are better than any other.” Unless “better” is used in the sense of “better according to my morality”, but in that case the sentence would be barely worth saying.