Not Taking Over the World

Followup to: What I Think, If Not Why

My esteemed co-blogger Robin Hanson accuses me of trying to take over the world.

Why, oh why must I be so misunderstood?

(Well, it’s not like I don’t enjoy certain misunderstandings. Ah, I remember the first time someone seriously and not in a joking way accused me of trying to take over the world. On that day I felt like a true mad scientist, though I lacked a castle and a hunchbacked assistant.)

But if you’re working from the premise of a hard takeoff—an Artificial Intelligence that self-improves at an extremely rapid rate—and you suppose such extraordinary depth of insight and precision of craftsmanship that you can actually specify the AI’s goal system instead of automatically failing -

- then it takes some work to come up with a way not to take over the world.

Robin talks up the drama inherent in the intelligence explosion, presumably because he feels that this is a primary source of bias. But I’ve got to say that Robin’s dramatic story does not sound like the story I tell of myself. There, the drama comes from tampering with such extreme forces that every single idea you invent is wrong. The standardized Final Apocalyptic Battle of Good vs. Evil would be trivial by comparison; then all you have to do is put forth a desperate effort. Facing an adult problem in a neutral universe isn’t so straightforward. Your enemy is yourself, who will automatically destroy the world, or just fail to accomplish anything, unless you can defeat you. That is the drama I crafted into the story I tell myself, for I too would disdain anything so clichéd as Armageddon.

So, Robin, I’ll ask you something of a probing question. Let’s say that someone walks up to you and grants you unlimited power.

What do you do with it, so as to not take over the world?

Do you say, “I will do nothing—I take the null action”?

But then you have instantly become a malevolent God, as Epicurus said:

Is God willing to prevent evil, but not able? Then he is not omnipotent.
Is he able, but not willing? Then he is malevolent.
Is he both able and willing? Then whence cometh evil?
Is he neither able nor willing? Then why call him God?

Peter Norvig said, “Refusing to act is like refusing to allow time to pass.” The null action is also a choice. So have you not, in refusing to act, established all sick people as sick, established all poor people as poor, ordained all in despair to continue in despair, and condemned the dying to death? Will you not be, until the end of time, responsible for every sin committed?

Well, yes and no. If someone says, “I don’t trust myself not to destroy the world, therefore I take the null action,” then I would tend to sigh and say, “If that is so, then you did the right thing.” Afterward, murderers will still be responsible for their murders, and altruists will still be creditable for the help they give.

And to say that you used your power to take over the world by doing nothing with it seems to stretch the ordinary meaning of the phrase.

But it wouldn’t be the best thing you could do with unlimited power, either.

With “unlimited power” you have no need to crush your enemies. You have no moral defense if you treat your enemies with less than the utmost consideration.

With “unlimited power” you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you. If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.

Unlimited power removes a lot of moral defenses, really. You can’t say “But I had to.” You can’t say “Well, I wanted to help, but I couldn’t.” The only excuse for not helping is if you shouldn’t, which is harder to establish.

And let us also suppose that this power is wieldable without side effects or configuration constraints; it is wielded with unlimited precision.

For example, you can’t take refuge in saying anything like: “Well, I built this AI, but any intelligence will pursue its own interests, so now the AI will just be a Ricardian trading partner with humanity as it pursues its own goals.” Say, the programming team has cracked the “hard problem of conscious experience” in sufficient depth that they can guarantee that the AI they create is not sentient—not a repository of pleasure, or pain, or subjective experience, or any interest-in-self—and hence, the AI is only a means to an end, and not an end in itself.

And you cannot take refuge in saying, “In invoking this power, the reins of destiny have passed out of my hands, and humanity has passed on the torch.” Sorry, you haven’t created a new person yet—not unless you deliberately invoke the unlimited power to do so—and then you can’t take refuge in the necessity of it as a side effect; you must establish that it is the right thing to do.

The AI is not necessarily a trading partner. You could make it a nonsentient device that just gave you things, if you thought that were wiser.

You cannot say, “The law, in protecting the rights of all, must necessarily protect the right of Fred the Deranged to spend all day giving himself electrical shocks.” The power is wielded with unlimited precision; you could, if you wished, protect the rights of everyone except Fred.

You cannot take refuge in the necessity of anything—that is the meaning of unlimited power.

We will even suppose (for it removes yet more excuses, and hence reveals more of your morality) that you are not limited by the laws of physics as we know them. You are bound to deal only in finite numbers, but not otherwise bounded. This is so that we can see the true constraints of your morality, apart from your being able to plead constraint by the environment.

In my reckless youth, I used to think that it might be a good idea to flash-upgrade to the highest possible level of intelligence you could manage on available hardware. Being smart was good, so being smarter was better, and being as smart as possible as quickly as possible was best—right?

But when I imagined having infinite computing power available, I realized that no matter how large a mind you made yourself, you could just go on making yourself larger and larger and larger. So that wasn’t an answer to the purpose of life. And only then did it occur to me to ask after eudaimonic rates of intelligence increase, rather than just assuming you wanted to immediately be as smart as possible.

Considering the infinite case moved me to change the way I considered the finite case. Before, I was running away from the question by saying “More!” But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it.

Similarly with population: If you invoke the unlimited power to create a quadrillion people, then why not a quintillion? If 3^^^3, why not 3^^^^3? So you can’t take refuge in saying, “I will create more people—that is the difficult thing, and to accomplish it is the main challenge.” What makes an individual life worth living?
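(An aside on the notation, added here as my own gloss rather than part of the original argument: the carets are Knuth’s up-arrow notation, where each additional arrow iterates the operation below it.)

$$
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow 3^{3^{3}} \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987
$$

That is, a power tower of threes more than seven trillion levels tall; adding one more arrow, as in 3^^^^3, iterates that entire construction again.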

You can say, “It’s not my place to decide; I leave it up to others,” but then you are responsible for the consequences of that decision as well. You should say, at least, how this differs from the null action.

So, Robin, reveal to us your character: What would you do with unlimited power?