BOOK DRAFT: ‘Ethics and Superintelligence’ (part 1)

I’m researching and writing a book on meta-ethics and the technological singularity. I plan to post the first draft of the book, in tiny parts, to the Less Wrong discussion area. Your comments and constructive criticisms are much appreciated.

This is not a book for a mainstream audience. Its style is that of contemporary Anglophone philosophy. Compare, for example, Chalmers’ survey article on the singularity.

Bibliographic references are provided here.

Part 1 is below...

Chapter 1: The technological singularity is coming soon.

The Wright Brothers flew their spruce-wood plane for 200 feet in 1903. Only 66 years later, Neil Armstrong walked on the moon, about 240,000 miles from Earth.

The rapid pace of progress in the physical sciences drives many philosophers to science envy. Philosophers have been researching the core problems of metaphysics, epistemology, and ethics for millennia, yet they have not reached the kind of consensus that scientists have achieved on so many core problems in physics, chemistry, and biology.

I won’t argue about why this is so. Instead, I will argue that if philosophy maintains its slow pace and fails to solve certain philosophical problems within the next two centuries, the result may be the extinction of the human species.

This extinction would result from a “technological singularity” in which an artificial intelligence (AI) of human-level general intelligence uses its intelligence to improve its own intelligence, which would enable it to improve its intelligence even more, leading to an “intelligence explosion” feedback loop that would give this AI inestimable power to accomplish its goals. If such an intelligence explosion occurs, then it is critically important to program the AI’s goal system wisely. This project could mean the difference between a utopian solar system of unprecedented harmony and happiness, and a solar system in which all available matter is converted into parts for a planet-sized computer built to solve difficult mathematical problems.

The technical challenges of designing the goal system of such a superintelligence are daunting.[1] But even if we can solve those problems, the question of which goal system to give the superintelligence remains. It is a question of philosophy; it is a question of ethics.

Philosophy has impacted billions of humans through religion, culture, and government. But now the stakes are even higher. When the technological singularity occurs, the philosophy behind the goal system of a superintelligent machine will determine the fate of the species, the solar system, and perhaps the galaxy.

***

Now that I have laid my positions on the table, I must argue for them. In this chapter I argue that the technological singularity is likely to occur within the next 200 years unless a worldwide catastrophe drastically impedes scientific progress. In chapter two I survey the philosophical problems involved in designing the goal system of a singular superintelligence, which I call the “singleton.”

In chapter three I show how the singleton will produce very different future worlds depending on which normative theory is used to design its goal system. In chapter four I describe what is perhaps the most developed plan for the design of the singleton’s goal system: Eliezer Yudkowsky’s “Coherent Extrapolated Volition.” In chapter five I present some objections to Coherent Extrapolated Volition.

In chapter six I argue that we cannot decide how to design the singleton’s goal system without considering meta-ethics, because normative theory depends on meta-ethics. In chapter seven I argue that we should invest little effort in meta-ethical theories that do not fit well with our emerging reductionist picture of the world, just as we quickly abandon scientific theories that don’t fit the available scientific data. I also specify several meta-ethical positions that I think are good candidates for abandonment.

But the looming problem of the technological singularity requires us to have a positive theory, too. In chapter eight I propose some meta-ethical claims on which I think naturalists should come to agree. In chapter nine I consider the implications of these plausible meta-ethical claims for the design of the singleton’s goal system.

***



[1] These technical challenges are discussed in the literature on artificial agents in general and Artificial General Intelligence (AGI) in particular. Russell and Norvig (2009) provide a good overview of the challenges involved in the design of artificial agents. Goertzel and Pennachin (2010) provide a collection of recent papers on the challenges of AGI. Yudkowsky (2010) proposes a new extension of causal decision theory to suit the needs of a self-modifying AI. Yudkowsky (2001) discusses other technical (and philosophical) problems related to designing the goal system of a superintelligence.