Superintelligence 29: Crunch time

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.

Welcome. This week we discuss the twenty-ninth section in the reading guide: Crunch time. This corresponds to the last chapter in the book, and the last discussion here (even though the reading guide shows a mysterious 30th section).

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 15


Summary

  1. As we have seen, the future of AI is complicated and uncertain. So, what should we do? (p255)

  2. Intellectual discoveries can be thought of as moving the arrival of information earlier. For many questions in math and philosophy, getting answers earlier does not matter much. Also, people or machines will likely be better equipped to answer these questions in the future. For other questions, e.g. about AI safety, getting the answers earlier matters a lot. This suggests working on the time-sensitive problems instead of the timeless problems. (p255-6)

  3. We should work on projects that are robustly positive in value (good in many scenarios, and on many moral views).

  4. We should work on projects that are elastic to our efforts (i.e. cost-effective; high output per input).

  5. Two objectives that seem good on these grounds: strategic analysis and capacity building (p257)

  6. An important form of strategic analysis is the search for crucial considerations. (p257)

  7. Crucial consideration: an idea with the potential to change our views substantially, e.g. reversing the sign of the desirability of important interventions. (p257)

  8. An important way of building capacity is assembling a capable support base who take the future seriously. These people can then respond to new information as it arises. One key instantiation of this might be an informed and discerning donor network. (p258)

  9. It is valuable to shape the culture of the field of AI risk as it grows. (p258)

  10. It is valuable to shape the social epistemology of the AI field. For instance, can people respond to new crucial considerations? Is information spread and aggregated effectively? (p258)

  11. Other interventions that might be cost-effective: (p258-9)

    1. Technical work on machine intelligence safety

    2. Promoting ‘best practices’ among AI researchers

    3. Miscellaneous opportunities that arise, not necessarily closely connected with AI, e.g. promoting cognitive enhancement

  12. We are like a large group of children holding triggers to a powerful bomb: the situation is very troubling, but calls for bitter determination to be as competent as we can, on what is the most important task facing our time. (p259-60)

Another view

Alexis Madrigal talks to Andrew Ng, chief scientist at Baidu Research, who does not think it is crunch time:

Andrew Ng builds artificial intelligence systems for a living. He taught AI at Stanford, built AI at Google, and then moved to the Chinese search engine giant, Baidu, to continue his work at the forefront of applying artificial intelligence to real-world problems.

So when he hears people like Elon Musk or Stephen Hawking—people who are not intimately familiar with today’s technologies—talking about the wild potential for artificial intelligence to, say, wipe out the human race, you can practically hear him facepalming.

“For those of us shipping AI technology, working to build these technologies now,” he told me, wearily, yesterday, “I don’t see any realistic path from the stuff we work on today—which is amazing and creating tons of value—but I don’t see any path for the software we write to turn evil.”

But isn’t there the potential for these technologies to begin to create mischief in society, if not, say, extinction?

“Computers are becoming more intelligent and that’s useful as in self-driving cars or speech recognition systems or search engines. That’s intelligence,” he said. “But sentience and consciousness is not something that most of the people I talk to think we’re on the path to.”

Not all AI practitioners are as sanguine about the possibilities of robots. Demis Hassabis, the founder of the AI startup DeepMind, which was acquired by Google, made the creation of an AI ethics board a requirement of its acquisition. “I think AI could be world changing, it’s an amazing technology,” he told journalist Steven Levy. “All technologies are inherently neutral but they can be used for good or bad so we have to make sure that it’s used responsibly. I and my cofounders have felt this for a long time.”

So, I said, simply project forward progress in AI and the continued advance of Moore’s Law and associated increases in computer speed, memory size, etc. What about in 40 years, does he foresee sentient AI?

“I think to get human-level AI, we need significantly different algorithms and ideas than we have now,” he said. English-to-Chinese machine translation systems, he noted, had “read” pretty much all of the parallel English-Chinese texts in the world, “way more language than any human could possibly read in their lifetime.” And yet they are far worse translators than humans who’ve seen a fraction of that data. “So that says the human’s learning algorithm is very different.”

Notice that he didn’t actually answer the question. But he did say why he personally is not working on mitigating the risks some other people foresee in superintelligent machines.

“I don’t work on preventing AI from turning evil for the same reason that I don’t work on combating overpopulation on the planet Mars,” he said. “Hundreds of years from now when hopefully we’ve colonized Mars, overpopulation might be a serious problem and we’ll have to deal with it. It’ll be a pressing issue. There’s tons of pollution and people are dying and so you might say, ‘How can you not care about all these people dying of pollution on Mars?’ Well, it’s just not productive to work on that right now.”

Current AI systems, Ng contends, are basic relative to human intelligence, even if there are things they can do that exceed the capabilities of any human. “Maybe hundreds of years from now, maybe thousands of years from now—I don’t know—maybe there will be some AI that turn evil,” he said, “but that’s just so far away that I don’t know how to productively work on that.”

The bigger worry, he noted, was the effect that increasingly smart machines might have on the job market, displacing workers in all kinds of fields much faster than even industrialization displaced agricultural workers or automation displaced factory workers.

Surely, creative industry people like myself would be immune from the effects of this kind of artificial intelligence, though, right?

“I feel like there is more mysticism around the notion of creativity than is really necessary,” Ng said. “Speaking as an educator, I’ve seen people learn to be more creative. And I think that some day, and this might be hundreds of years from now, I don’t think that the idea of creativity is something that will always be beyond the realm of computers.”

And the less we understand what a computer is doing, the more creative and intelligent it will seem. “When machines have so much muscle behind them that we no longer understand how they came up with a novel move or conclusion,” he concluded, “we will see more and more what look like sparks of brilliance emanating from machines.”

Andrew Ng commented:

Enough thoughtful AI researchers (including Yoshua Bengio, Yann LeCun) have criticized the hype about evil killer robots or “superintelligence,” that I hope we can finally lay that argument to rest. This article summarizes why I don’t currently spend my time working on preventing AI from turning evil.


Notes

1. Replaceability

‘Replaceability’ is the general issue of the work that you do producing some complicated counterfactual rearrangement of different people working on different things at different times. For instance, if you solve a math question, this means it gets solved somewhat earlier, and also someone else in the future does something else instead, which someone else might have done, etc. For a much more extensive explanation of how to think about replaceability, see 80,000 Hours. They also link to some of the other discussion of the issue within Effective Altruism (a movement interested in efficiently improving the world, thus naturally interested in AI risk and the nuances of evaluating impact).
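The basic counterfactual arithmetic behind replaceability can be made concrete with a toy model (my own illustration, not taken from 80,000 Hours; all names and numbers are hypothetical): your net impact is the value of the world in which you take a role, minus the value of the world in which the next-best person takes it instead.

```python
# Toy replaceability model: values are hypothetical "units of good".
# In the world where you take a role, the person you displace does their
# next-best option instead; in the world without you, they take the role.

def counterfactual_impact(your_output, replacement_output, replacement_alt_output):
    """Net impact of you taking the role, relative to the world without you."""
    world_with_you = your_output + replacement_alt_output
    world_without_you = replacement_output
    return world_with_you - world_without_you

# Example: you would produce 10 units in the role; your would-be replacement
# would produce 8 units in the role, or 5 units in their next-best option.
# Your counterfactual impact is (10 + 5) - 8 = 7 units, not the naive 10.
print(counterfactual_impact(10, 8, 5))  # 7
```

The toy numbers make the standard point: naive impact (10 units) overstates counterfactual impact (7 units), and the overstatement grows as your would-be replacement approaches your own ability.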

2. When should different AI safety work be done?

For more discussion of the timing of work on AI risks, see Ord 2014. I’ve also written a bit about what should be prioritized early.

3. Review

If you’d like to quickly review the entire book at this point, Amanda House has a summary here, including some handy diagrams.

4. What to do?

If you are convinced that AI risk is an important priority, and want some more concrete ways to be involved, here are some people working on it: FHI, FLI, CSER, GCRI, MIRI, AI Impacts (note: I’m involved with the last two). You can also do independent research in many academic fields, some of which I have pointed out in earlier weeks. Here is my list of projects, and of other lists of projects. You could also develop expertise in AI or AI safety (MIRI has a guide to aspects related to their research here; all of the aforementioned organizations have writings). You could also work on improving humanity’s capacity to deal with such problems. Cognitive enhancement is one example. Among people I know, improving individual rationality and improving the effectiveness of the philanthropic sector are also popular. I think there are many other plausible directions. This has not been a comprehensive list of things you could do, and thinking more about what to do on your own is also probably a good option.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What should be done about AI risk? Are there important things that none of the current organizations are working on?

  2. What work is important to do now, and what work should be deferred?

  3. What forms of capability improvement are most useful for navigating AI risk?

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group, though, is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

This is the last reading group, so how to proceed is up to you, even more than usual. Thanks for joining us!