The Singularity Wars

(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)

The good news is that there were no Singularity Wars.

The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You’d expect to see something like the People’s Front of Judea and the Judean People’s Front, burning each other’s grain supplies as the Romans moved in.

The Singularity Institute for Artificial Intelligence was founded first, in 2000, by Eliezer Yudkowsky.

Singularity University was founded in 2008. Ray Kurzweil, the driving force behind SU, was also active in SIAI, serving on its board in varying capacities in the years up to 2010.

SIAI’s multi-part name was clunky, and its domain, singinst.org, unmemorable. I kept accidentally visiting siai.org for months, but it belonged to the Self Insurance Association of Illinois. (The cool new domain name singularity.org, recently acquired after a rather uninspired site had sat there for several years, arrived shortly before it was no longer relevant.) All the better to confuse you with: for the last few years, SIAI has been going by the shortened name Singularity Institute, abbreviated SI.

The annual Singularity Summit was launched by SI, together with Kurzweil, in 2006. SS was SI’s premier PR mechanism, mustering geek heroes to give their tacit endorsement of SI’s seriousness, if not its views, by agreeing to appear on stage.

The Singularity Summit was always off-topic for SI: more SU-like than SI-like. Speakers spoke about whatever technologically advanced ideas interested them. Occasional SI representatives spoke about the Intelligence Explosion, but they too would often stray into other areas like rationality and the scientific process. Yet SS remained firmly in SI’s hands.

It became clear over the years that SU and SI have almost nothing to do with each other except for the word “Singularity.” The word has three major meanings, and of these, Yudkowsky favored the Intelligence Explosion while Kurzweil pushed Accelerating Change.

But actually, SU’s activities have little to do with the Singularity, even under Kurzweil’s definition. Kurzweil writes of a future, around the 2040s, in which the human condition is altered beyond recognition. But SU mostly deals with whizzy next-gen technology. They are doing something important, encouraging technological advancement with a focus on helping humanity, but they spend little time working on optimizing the end of our human existence as we know it. Yudkowsky calls what they do “technoyay.” And maybe that’s what the Singularity means, nowadays. Time to stop using the word.

(I’ve also heard SU graduates saying “I was at Singularity last week,” on the pattern of “I was at Harvard last week,” eliding “University.” I think that counts as the end of Singularity as we know it.)

You might expect SU and SI to get into a stupid squabble about the name. People love fighting over words. But to everyone’s credit, I didn’t hear any squabbling, just confusion from those who were not in the know. Or you might expect SI to give up, change its name, and close down the Singularity Summit. But lo and behold, SU and SI settled the matter sensibly, amicably, in fact … rationally. SU bought the Summit and the entire “Singularity” brand from SI. For money! Yes! Coase rules!

SI chose the new name Machine Intelligence Research Institute. I like it.

The term “Artificial Intelligence” got burned out in the AI Winter of the early 1990s. It has been firmly taboo since then, even in the software industry, and even at its leading edge. I did technical evangelism for Unicorn, a leading industrial ontology software startup, and the phrase “Artificial Intelligence” was most definitely out of bounds. The term was not used even inside the company, despite a founder with a CogSci PhD and a co-founder with a master’s degree in AI.

The rarely used term “Machine Intelligence” throws off that baggage, and so SI managed to ditch two taboo terms at once.

The MIRI name is perhaps too broad. It could serve for any AI research group. The Machine Intelligence Research Institute focuses on decreasing the chances of a negative Intelligence Explosion and increasing the chances of a positive one, not on rushing to develop machine intelligence ASAP. But the name is accurate.

In 2005, the Future of Humanity Institute was founded at Oxford University, followed by the Centre for the Study of Existential Risk at Cambridge University in early 2013. FHI is doing good work, rivaling MIRI’s and in some ways surpassing it. CSER’s announced research area, and the reputations of its founders, suggest that we can expect good things. Competition for the sake of humanity! The more the merrier!

In late 2012, SI spun off the Center for Applied Rationality. Since 2008, much of SI’s energies, and particularly those of Yudkowsky, had gone into LessWrong.com and the field of rationality. As a tactic for bringing in smart, committed new researchers and organizers, this was highly successful, and who can argue with the importance of being more rational? But as a strategy for saving humanity from existential AI risk, this second focus was a distraction. SI got the point and split off CFAR.

Way to go, MIRI! So many of the criticisms I had of SI’s strategic direction and administration in the years after I first encountered it in 2005 have recently been resolved.

Next step: a much, much better human future.

The TL;DR, conveniently at the bottom of the article to encourage you to actually read it, is:

  • MIRI (formerly SIAI, SI): Working to avoid existential risk from future machine intelligence, while increasing the chances of a positive outcome

  • CFAR: Training in applied rationality

  • CSER: Research towards avoiding existential risk, with future machine intelligence as a strong focus

  • FHI: Researching various transhumanist topics, but with a strong research program in existential risk and future machine intelligence in particular

  • SU: Teaching and encouraging the development of next-generation technologies

  • SS: An annual forum for top geek heroes to speak on whatever interests them. Favored topics include societal trends, next-gen science and technology, and transhumanism.