Friendly AI—Being good vs. having great sex

[...] I think an LW post is important and interesting in proportion to how much it helps construct a Friendly AI, how much it gets people to participate in the human project [...]

-- Eliezer Yudkowsky

I’m not going to wait for philosophers to cover this issue correctly, or for use in FAI design.

-- Luke Muehlhauser


The above quotes suggest that some of the content on lesswrong.com has been written in support of friendly AI research.

My question: of what importance is ethics to friendly AI research? If a friendly AI is one that protects and cultivates human values, how does ethics help to achieve this?

Let’s assume that there exists some sort of objective right, whatever that actually means. If humans desire to be right, isn’t being right just the sort of human value that a friendly AI would seek to protect and cultivate?

What difference is there between wanting to be good and wanting to have a lot of great sex? Both seem to be things humans value, so both would have to be taken into account by a friendly AI.

If a friendly AI has to be able to extrapolate the coherent volition of humanity without any hard-coded knowledge of human values, why doesn’t this extend to ethics as well?

If we have to solve ethics before being able to design a friendly AI, if we have to hard-code what it means to be good, why doesn’t this apply equally to what it means to have great sex (or what it means to have sex at all)?

If a friendly AI is going to figure out what humans desire by extrapolating their volition, might it conclude that our volition is immoral and therefore undesirable?