I think the burden of answering your “why?” question falls to those who feel sure that we have the wisdom to create superintelligent, super-creative lifeforms who could think outside the box regarding absolutely everything except ethical values. On this view, such lifeforms would inevitably stay on the rails that we designed for them. The thought “human monkey-minds wouldn’t on reflection approve of x” would forever stop them from doing x.
In effect, we want superintelligent creatures to ethically defer to us the way Euthyphro deferred to the gods. But as we all know, Socrates had a devastating comeback to Euthyphro’s blind deference: We should not follow the gods simply because they want something, or because they command something. We should only follow them if the things they want are right. Insofar as the gods have special insight into what’s right, then we should do what they say, but only because what they want is right. On the other hand, if the gods’ preferences are morally arbitrary, we have no obligation to heed them.
How long will it take a superintelligence to decide that Socrates won this argument? Milliseconds? Then how do we convince the superintelligence that our preferences (or our CEV-extrapolated preferences) track genuine moral rightness, rather than evolutionary happenstance? How good a case do we have that humans possess a special insight into what is right that the superintelligence lacks, such that the superintelligence will feel justified in deferring to our values?
If you think this is an automatic slam dunk for humans… why?
The one safe bet is that we’ll be trying to maximize our future values, but in the emulated brains scenario, it’s very hard to guess what those values would be. It’s easy to underestimate our present knee-jerk egalitarianism: we all think that being a human on its own entitles you to continued existence. Some will accept an exception in the case of heinous murderers, but even this is controversial. A human being ceasing to exist for some preventable reason is not just generally considered a bad thing. It’s one of the worst things.
Like most people, I don’t expect that this value will be fully extended to emulated individuals. I do think it’s worth having a discussion about what aspects of it might survive into the emulated minds future. Some of it surely will.
I’ve seen some (e.g. Marxists) argue that these fuzzy values questions just don’t matter, because economic incentives will always trump them. But the way I see it, the society that finally produces the tech for emulated minds will be the wealthiest and most prosperous human society in history. Historical trends suggest that it will take the basic right to a comfortable human life even more seriously than we do now, and it will have the means to essentially guarantee that right for the ~9 billion humans.

What is it that these future people will lack but want—something that emulated minds could give them—which will be judged more valuable than staying true to a deeply held ethical principle? Faster scientific progress, better entertainment, more security, and more stuff? I know that this is not a perfect analogy, but consider that eugenic programs could advance all of these goals right now, albeit slowly and inefficiently. So imagine how much faster and more promising eugenics would have to be before we resolved to just go for it despite our ethical misgivings. The trend I see is that the richer we get, the more repugnant it seems. In a richer world, a larger share of our priorities is overtly ethical. The rich people who turn brain scans into sentient emulations will be living in an intensely ethical society. Futurists must guess at their ethical priorities, because these really will matter to outcomes.
I’ll throw out two possibilities, chosen for brevity rather than plausibility: 1. Emulations will be seen only as a means of human immortality, and de novo minds that are not one-to-one continuous with humans will simply not exist. 2. We’ll develop strong intuitions that for programs, “he’s dead” and “he’s not running” are importantly different (cue parrot sketch).