Not Taking Over the World

Followup to: What I Think, If Not Why

My esteemed co-blogger Robin Hanson accuses me of trying to take over the world.

Why, oh why must I be so misunderstood?

(Well, it’s not like I don’t enjoy certain misunderstandings. Ah, I remember the first time someone seriously and not in a joking way accused me of trying to take over the world. On that day I felt like a true mad scientist, though I lacked a castle and hunchbacked assistant.)

But if you’re working from the premise of a hard takeoff—an Artificial Intelligence that self-improves at an extremely rapid rate—and you suppose such extraordinary depth of insight and precision of craftsmanship that you can actually specify the AI’s goal system instead of automatically failing -

- then it takes some work to come up with a way not to take over the world.

Robin talks up the drama inherent in the intelligence explosion, presumably because he feels that this is a primary source of bias. But I’ve got to say that Robin’s dramatic story does not sound like the story I tell of myself. There, the drama comes from tampering with such extreme forces that every single idea you invent is wrong. The standardized Final Apocalyptic Battle of Good vs. Evil would be trivial by comparison; then all you have to do is put forth a desperate effort. Facing an adult problem in a neutral universe isn’t so straightforward. Your enemy is yourself, who will automatically destroy the world, or just fail to accomplish anything, unless you can defeat you. That is the drama I crafted into the story I tell myself, for I too would disdain anything so clichéd as Armageddon.

So, Robin, I’ll ask you something of a probing question. Let’s say that someone walks up to you and grants you unlimited power.

What do you do with it, so as to not take over the world?

Do you say, “I will do nothing—I take the null action”?

But then you have instantly become a malevolent God, as Epicurus said:

Is God willing to prevent evil, but not able? Then he is not omnipotent.
Is he able, but not willing? Then he is malevolent.
Is he both able and willing? Then whence cometh evil?
Is he neither able nor willing? Then why call him God?

Peter Norvig said, “Refusing to act is like refusing to allow time to pass.” The null action is also a choice. So have you not, in refusing to act, established all sick people as sick, established all poor people as poor, ordained all in despair to continue in despair, and condemned the dying to death? Will you not be, until the end of time, responsible for every sin committed?

Well, yes and no. If someone says, “I don’t trust myself not to destroy the world, therefore I take the null action,” then I would tend to sigh and say, “If that is so, then you did the right thing.” Afterward, murderers will still be responsible for their murders, and altruists will still be creditable for the help they give.

And to say that you used your power to take over the world by doing nothing to it seems to stretch the ordinary meaning of the phrase.

But it wouldn’t be the best thing you could do with unlimited power, either.

With “unlimited power” you have no need to crush your enemies. You have no moral defense if you treat your enemies with less than the utmost consideration.

With “unlimited power” you cannot plead the necessity of monitoring or restraining others so that they do not rebel against you. If you do such a thing, you are simply a tyrant who enjoys power, and not a defender of the people.

Unlimited power removes a lot of moral defenses, really. You can’t say “But I had to.” You can’t say “Well, I wanted to help, but I couldn’t.” The only excuse for not helping is if you shouldn’t, which is harder to establish.

And let us also suppose that this power is wieldable without side effects or configuration constraints; it is wielded with unlimited precision.

For example, you can’t take refuge in saying anything like: “Well, I built this AI, but any intelligence will pursue its own interests, so now the AI will just be a Ricardian trading partner with humanity as it pursues its own goals.” Say that the programming team has cracked the “hard problem of conscious experience” in sufficient depth that they can guarantee that the AI they create is not sentient—not a repository of pleasure, or pain, or subjective experience, or any interest-in-self—and hence, the AI is only a means to an end, and not an end in itself.

And you cannot take refuge in saying, “In invoking this power, the reins of destiny have passed out of my hands, and humanity has passed on the torch.” Sorry, you haven’t created a new person yet—not unless you deliberately invoke the unlimited power to do so—and then you can’t take refuge in the necessity of it as a side effect; you must establish that it is the right thing to do.

The AI is not necessarily a trading partner. You could make it a nonsentient device that just gave you things, if you thought that were wiser.

You cannot say, “The law, in protecting the rights of all, must necessarily protect the right of Fred the Deranged to spend all day giving himself electrical shocks.” The power is wielded with unlimited precision; you could, if you wished, protect the rights of everyone except Fred.

You cannot take refuge in the necessity of anything—that is the meaning of unlimited power.

We will even suppose (for it removes yet more excuses, and hence reveals more of your morality) that you are not limited by the laws of physics as we know them. You are bound to deal only in finite numbers, but not otherwise bounded. This is so that we can see the true constraints of your morality, apart from your being able to plead constraint by the environment.

In my reckless youth, I used to think that it might be a good idea to flash-upgrade to the highest possible level of intelligence you could manage on available hardware. Being smart was good, so being smarter was better, and being as smart as possible as quickly as possible was best—right?

But when I imagined having infinite computing power available, I realized that no matter how large a mind you made yourself, you could just go on making yourself larger and larger and larger. So that wasn’t an answer to the purpose of life. And only then did it occur to me to ask after eudaimonic rates of intelligence increase, rather than just assuming you wanted to immediately be as smart as possible.

Considering the infinite case moved me to change the way I considered the finite case. Before, I was running away from the question by saying “More!” But considering an unlimited amount of ice cream forced me to confront the issue of what to do with any of it.

Similarly with population: If you invoke the unlimited power to create a quadrillion people, then why not a quintillion? If 3^^^3, why not 3^^^^3? So you can’t take refuge in saying, “I will create more people—that is the difficult thing, and to accomplish it is the main challenge.” What makes an individual life worth living?
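
(An aside on the notation, for readers who haven’t met it: 3^^^3 is Knuth’s up-arrow notation written in ASCII, where each extra arrow iterates the operation below it. A minimal sketch of how the numbers above grow:)

\[
\begin{aligned}
3 \uparrow 3 &= 3^{3} = 27,\\
3 \uparrow\uparrow 3 &= 3^{3^{3}} = 3^{27} = 7{,}625{,}597{,}484{,}987,\\
3 \uparrow\uparrow\uparrow 3 &= 3 \uparrow\uparrow \bigl(3 \uparrow\uparrow 3\bigr) \quad \text{(a power tower of } 7{,}625{,}597{,}484{,}987 \text{ threes)},\\
3 \uparrow\uparrow\uparrow\uparrow 3 &= 3 \uparrow\uparrow\uparrow \bigl(3 \uparrow\uparrow\uparrow 3\bigr).
\end{aligned}
\]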

You can say, “It’s not my place to decide; I leave it up to others,” but then you are responsible for the consequences of that decision as well. You should say, at least, how this differs from the null action.

So, Robin, reveal to us your character: What would you do with unlimited power?