Mahatma Armstrong: CEVed to death.

My main objection to Coherent Extrapolated Volition (CEV) is the “Extrapolated” part. I don’t see any reason to trust the extrapolated volition of humanity, and not just for self-centred reasons: I don’t see any reason to trust my own extrapolated volition either. I think it’s perfectly possible that my extrapolated volition would follow some scenario like this:

  1. It starts with me, Armstrong 1. I want to be more altruistic at the next level, valuing other humans more.

  2. The altruistic Armstrong 2 wants to be even more altruistic. He makes himself into a perfectly altruistic utilitarian towards humans, and increases his altruism towards animals.

  3. Armstrong 3 wonders about the difference between animals and humans, and why he should value one of them more. He decides to increase his altruism equally towards all sentient creatures.

  4. Armstrong 4 is worried about the fact that sentience isn’t clearly defined, and seems arbitrary anyway. He increases his altruism towards all living things.

  5. Armstrong 5’s problem is that the barrier between living and non-living things isn’t clear either (e.g. viruses). He decides to solve this by valuing all worthwhile things: are not art and beauty worth something as well?

  6. But what makes a thing worthwhile? Is there not art in everything, beauty in the eye of the right beholder? Armstrong 6 will make himself value everything.

  7. Armstrong 7 is in turmoil: so many animals prey upon other animals, or destroy valuable rocks! To avoid this, he decides the most moral thing he can do is to try and destroy all life, and then create a world of stasis for the objects that remain.

There are many other ways this could go, maybe ending up as a negative utilitarian or completely indifferent, but that’s enough to give the flavour. You might trust the person you want to be to do the right things. But you can’t trust them to want to be the right person, especially several levels in (compare with the argument in this post, and my very old chaining god idea). I’m not claiming that such a value drift is inevitable, just that it’s possible, and so I’d want my initial values to dominate when there is a large conflict.

Nor do I give Armstrong 7’s values any credit for having originated from mine. Under torture, I’m pretty sure I could be made to accept any system of values whatsoever; there are other ways that would provably alter my values, so I don’t see any reason to privilege Armstrong 7’s values in this way.

“But,” says the objecting strawman, “this is completely different! Armstrong 7’s values are the ones that you would reach by following the path you would want to follow anyway! That’s where you would get to, if you started out wanting to be more altruistic, had control over your own motivational structure, and grew and learnt and knew more!”

“Thanks for pointing that out,” I respond, “now that I know where that ends up, I must make sure to change the path I would want to follow! I’m not sure which to give up: wanting to be more altruistic, being willing to touch my own motivational structure, or wanting to grow and learn and know more. Those all sound pretty good, but if they end up at Armstrong 7, something’s going to have to give.”