Criticisms of the Metaethics

I’ll admit that I’m using the LessWrong boards to try to figure out flaws in my own philosophical ideas. I should also make a disclaimer that I do not dispute the usefulness of Eliezer’s ideas for the purposes of building a Friendly AI.

My criticisms serve a different purpose: namely, to argue that, contrary to what I am led to believe most of this site holds, Eliezer’s metaethics does not work for resolving ethical dilemmas except as a set of arbitrary rules, and is in no way the stand-out best choice compared to any other self-consistent deontological or consequentialist system.

I’ll also admit that I have something of a bias, for those looking in: I find it an interesting intellectual challenge to look through philosophies and find weak points in them, so I may have been over-eager to find a flaw that doesn’t exist. I have been attempting to find an appropriate flaw for some time, as some of my posts may have foreshadowed.

Finally, I will note that I am attempting to confine my attacks to Eliezer’s ethics, despite its connections to Eliezer’s epistemology.

---------------------------------------

1: My Basic Argument

Typically, people ask two things of ethics: a reason to be ethical in the first place, and a way to resolve ethical dilemmas. Eliezer gets around the former by, effectively, appealing to the fact that people want to be moral even if there is no universally compelling argument.

The problem with Eliezer’s metaethics is based around what I call the A-case, after the character I invented for it when I first thought up this idea. A has two options. Option 1 is the best choice from a Consequentialist perspective, and A is smart enough to figure that out. However, following Option 1 would make A feel very guilty for some reason (which A cannot overcome merely by thinking about it), whereas Option 2 would feel morally right on an emotive level.

This, of course, implies that A is not greatly influenced by consequentialism, but that’s quite plausible. Perhaps you have to be irrational to be an intelligent non-consequentialist, but an irrational non-consequentialist smart enough to perform a utility calculation as a theoretical exercise is plausible.

How can we say that the right thing for A to do is Option 1, in such a way as to be both rational and in any way convincing to A? Given the premises, it is likely that any possible argument will be rejected by A in such a manner that you can’t claim A is being irrational.

This can also be used against any particular deontological code (in fact more effectively, due to greater plausibility) by substituting that code for Consequentialism and claiming that, according to it, Option 1 is A’s moral duty. You can define “should” all you like, but A is using a different definition of “should” (not part of the opening scenario, but a safe inference except for a few unusual philosophers). You are talking about two different things.

-----------------------------------------------------------------

2: Addressing Counterarguments

i:

It could be argued that A has a rightness function which, on reflection, will lead A to embrace consequentialism as best for humanity as a whole. This is, however, not necessarily correct. To use an extreme case, what if A is being asked to kill A’s own innocent lover, or her own baby? (“Her” because the intuition is likely much stronger that way.) Some people in A’s position have such rightness functions; it is easily possible that A does not.

In addition, a follower of LessWrong morality in its standard form has a dilemma here. If you say that A is still morally obliged to kill her own baby, then Eliezer’s own arguments can be turned against you: one would still pull a child off the train tracks regardless of any ‘objective’ right. If you say she isn’t, you’ve conceded the case.

A deontological theory is either founded on intuitions or it is not. If not, Hume’s is-ought distinction refutes it. If it is, then it faces similar dilemmas in scenarios like this. Intuitions, however, do not add up to a logically consistent philosophy: “moral luck” (the idea that a person can be more or less morally responsible based on factors outside their control) feels like an oxymoron at first, but many intuitions depend on it.

ii:

One possible counterargument is that A wants to do things in the world, and merely following A’s feelings turns A into a morality pump, taking actions which don’t make sense. However, there are several problems with this.

i- A’s actions probably do make sense from the perspective of “make A feel morally justified”. A can’t self-modify (at least not directly), after all.

ii- Depending on the strengths of the emotions, A does not necessarily care even if A is aware of the inconsistencies in A’s actions. There are plenty of possible cases: a person dealing with those with whom they have close emotional ties, biases related to race or physical attractiveness, condemning large numbers of innocents to death, etc.

iii:

A final counterargument would be that the way to solve this is through a Coherentist-style Reflective Equilibrium. Even if Coherentism is not epistemically true, treating intuitions as if it were and following the Coherentist method could produce a result that feels satisfying. The problem is: what if it doesn’t? If a person’s emotions are strong enough, no amount of Reflective Equilibrium will overcome them.

If you take an emotivist position, however, you face the problem that Emotivism has no solution when feelings contradict each other.

------------------------------------------------------------------

3: Conclusions

My contention here is that we have a serious problem. The concept of right and wrong is like the concept of personal identity: merely something to be abolished for a more accurate view of what exists. It can be replaced with “Wants” (for people who have not a unified moral system but only various moral feelings), “Moralities” (systematic moral codes which are internally coherent), and “Pseudo-Moralities”, with no objective morality, even in the Yudkowskian sense, existing.

A delusion of morality exists in most human minds, of course, just as a delusion of personal identity exists in most if not all human minds. “Moralities” can still exist in terms of groups of entities who all want similar things or agree on basic moral rules, which can be taken to their logical conclusions.

Why can that not lead to morality? It can, but accepting a morality on that basis implies that rational argument (as opposed to emotional argument, which is a different matter) is in many cases entirely impossible between humans with different moralities, just as it is with aliens.

This leaves two types of rational argument possible about ethical questions:

-Demonstrating that a person would want something different if they knew all the facts, whether facts such as “God doesn’t exist”, facts such as “This action won’t have the consequences you think it will”, or facts about the human psyche.

-Showing that a person’s Morality has internal inconsistencies, which for most people will mean they discard it. (With mere moral Wants this is more debatable.)

Arguably it also leads to a third: demonstrating to a person that they do not really want what they think they want. However, this is a philosophical can of worms I don’t want to open, both because it is highly complicated (I can think of plenty of arguments against the possibility of such, even if I am not so convinced they are true as to assert it) and because solving it does not contribute much to the main issue.

Eliezer’s morality does not even work on that basis, however. Consider any scenario where an individual B:

i- Acts against Eliezer’s moral code

ii- Feels morally right about doing so, and would have felt guilty for following Eliezer’s ideas

In such a scenario, B can argue against somebody trying to use Eliezer’s ideas against them by pointing out that, regardless of any Objective Morality, Eliezer’s own case for still dragging children off train tracks applies just as well to B.

I will not delve into what proportion of humans can be said to make up a single Morality through having basically similar premises and intuitions. Although there are reasons to doubt it is as large as you’d think (take the A-case), I’m not sure such an assessment would work anyway.

In conclusion: there is no Universally Compelling Argument amongst humans, or even amongst rational humans.