Superintelligence 5: Forms of Superintelligence

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome. This week we discuss the fifth section in the reading guide: Forms of superintelligence. This corresponds to Chapter 3, on different ways in which an intelligence can be super.

This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: Chapter 3 (p52-61)


Summary

  1. A speed superintelligence could do what a human does, but faster. This would make the outside world seem very slow to it. It might cope with this partially by being very tiny, or virtual. (p53)

  2. A collective superintelligence is composed of smaller intellects, interacting in some way. It is especially good at tasks that can be broken into parts and completed in parallel. It can be improved by adding more of the smaller intellects, or by organizing them better. (p54)

  3. A quality superintelligence can carry out intellectual tasks that humans just can’t in practice, without necessarily being better or faster at the things humans can do. This can be understood by analogy with the difference between other animals and humans, or the difference between humans with and without certain cognitive capabilities. (p56-7)

  4. These different kinds of superintelligence are especially good at different kinds of tasks. We might say they have different ‘direct reach’. Ultimately they could all lead to one another, so can indirectly carry out the same tasks. We might say their ‘indirect reach’ is the same. (p58-9)

  5. We don’t know how smart it is possible for a biological or a synthetic intelligence to be. Nonetheless we can be confident that synthetic entities can be much more intelligent than biological entities.

    1. Digital intelligences would have better hardware: they would be made of components ten million times faster than neurons; the components could communicate about two million times faster than neurons can; they could use many more components while our brains are constrained to our skulls; it looks like better memory should be feasible; and they could be built to be more reliable, long-lasting, flexible, and well suited to their environment.

    2. Digital intelligences would have better software: they could be cheaply and non-destructively ‘edited’; they could be duplicated arbitrarily; they could have well-aligned goals as a result of this duplication; they could share memories (at least for some forms of AI); and they could have powerful dedicated software (like our vision system) for domains where we have to rely on slow general reasoning.
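The hardware ratios in point 5.1 can be checked with back-of-envelope arithmetic. A minimal sketch, where the biological and silicon figures (neuron firing rate, clock speed, axon and fiber signal speeds) are my own rough assumptions rather than numbers from the book:

```python
# Rough back-of-envelope check of the hardware ratios in point 5.1.
# All figures below are illustrative assumptions, not from the text.

neuron_firing_hz = 200          # peak firing rate of a fast neuron (~200 Hz)
transistor_clock_hz = 2e9       # a modest 2 GHz processor clock

axon_signal_m_per_s = 120       # fast myelinated axon conduction speed
fiber_signal_m_per_s = 2e8      # light in optical fiber (~2/3 of c)

speed_ratio = transistor_clock_hz / neuron_firing_hz
comm_ratio = fiber_signal_m_per_s / axon_signal_m_per_s

print(f"component speed ratio: {speed_ratio:.0e}")   # ~1e7: ten million
print(f"signal speed ratio:    {comm_ratio:.0e}")    # ~2e6: two million
```

With these (debatable) inputs, the ratios land in the same ballpark as the chapter’s “ten million” and “two million” figures.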

Notes

  1. This chapter is about different kinds of superintelligent entities that could exist. I like to think about the closely related question, ‘what kinds of better can intelligence be?’ You can be a better baker if you can bake a cake faster, or bake more cakes, or bake better cakes. Similarly, a system can become more intelligent if it can do the same intelligent things faster, or if it does things that are qualitatively more intelligent. (Collective intelligence seems somewhat different, in that it appears to be a means to be faster or able to do better things, though it may have benefits in dimensions I’m not thinking of.) I think the chapter is getting at different ways intelligence can be better rather than ‘forms’ in general, which might vary on many other dimensions (e.g. emulation vs AI, goal directed vs. reflexive, nice vs. nasty).

  2. Some of the hardware and software advantages mentioned would be pretty transformative on their own. If you haven’t before, consider taking a moment to think about what the world would be like if people could be cheaply and perfectly replicated, with their skills intact. Or if people could live arbitrarily long by replacing worn components.

  3. The main differences between increasing the intelligence of a system via speed and via collectiveness seem to be: (1) the ‘collective’ route requires that you can break up the task into parallelizable subtasks, (2) it generally has larger costs from communication between those subparts, and (3) it can’t produce a single unit as fast as a comparable ‘speed-based’ system. This suggests that anything a collective intelligence can do, a comparable speed intelligence can do at least as well. One counterexample I can think of: groups often include people with a diversity of knowledge and approaches, so the group can do a lot more productive thinking than a single person could. It seems wrong to count this as a virtue of collective intelligence in general however, since you could also have a single fast system that tried varied approaches at different times.
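The tradeoff in (1)–(3) above can be made concrete with an Amdahl’s-law-style toy model. This is my own illustrative framing, not the chapter’s: a k-times-faster single mind speeds up the whole task, while a k-member collective only divides the parallelizable fraction by k and pays some coordination overhead.

```python
def speed_time(task_time, k):
    """A k-times-faster single mind speeds up the entire task."""
    return task_time / k

def collective_time(task_time, k, parallel_frac, comm_cost):
    """k smaller minds: only the parallelizable fraction is divided by k;
    the serial fraction is unchanged, and coordination adds overhead."""
    serial = (1 - parallel_frac) * task_time
    parallel = parallel_frac * task_time / k
    return serial + parallel + comm_cost

# A 100-hour task, 90% parallelizable, 2 hours of coordination overhead:
print(speed_time(100, 10))                # 10.0
print(collective_time(100, 10, 0.9, 2))   # 10 + 9 + 2 = 21.0
```

On this model the collective matches the speed intelligence only in the limit where the task is fully parallelizable and communication is free, which is one way of restating the note’s conclusion.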

  4. For each task, we can think of curves for how performance increases as we increase intelligence in these different ways. For instance, take the task of finding a fact on the internet quickly. It seems to me that a person who ran at 10x speed would find the fact 10x faster. Ten times as many people working in parallel would do it only a bit faster than one, depending on the variance of their individual performance, and whether they found some clever way to complement each other. It’s not obvious how to multiply qualitative intelligence by a particular factor, especially as there are different ways to improve the quality of a system. It also seems non-obvious to me how search speed would scale with a particular measure such as IQ.

  5. How much more intelligent do human systems get as we add more humans? I can’t find much of an answer, but people have investigated the effect of things like team size, city size, and scientific collaboration on various measures of productivity.

  6. The things we might think of as collective intelligences—e.g. companies, governments, academic fields—seem notable to me for being slow-moving, relative to their components. If someone were to steal some chewing gum from Target, Target can respond in the sense that an employee can try to stop them. And this is no slower than an individual human acting to stop their chewing gum from being taken. However it also doesn’t involve any extra problem-solving from the organization—to the extent that the organization’s intelligence goes into the issue, it has to have already done the thinking ahead of time. Target was probably much smarter than an individual human about setting up the procedures and the incentives to have a person there ready to respond quickly and effectively, but that might have happened over months or years.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Produce improved measures of (substrate-independent) general intelligence. Build on the ideas of Legg, Yudkowsky, Goertzel, Hernandez-Orallo & Dowe, etc. Differentiate intelligence quality from speed.

  2. List some feasible but non-realized cognitive talents for humans, and explore what could be achieved if they were given to some humans.

  3. List and examine some types of problems better solved by a speed superintelligence than by a collective superintelligence, and vice versa. Also, what are the returns on “more brains applied to the problem” (collective intelligence) for various problems? If there were merely a huge number of human-level agents added to the economy, how much would it speed up economic growth, technological progress, or other relevant metrics? If there were a large number of researchers added to the field of AI, how would it change progress?

  4. How does intelligence quality improve performance on economically relevant tasks?

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about ‘intelligence explosion kinetics’, a topic at the center of much contemporary debate over the arrival of machine intelligence. To prepare, read Chapter 4, The kinetics of an intelligence explosion (p62-77). The discussion will go live at 6pm Pacific time next Monday 20 October. Sign up to be notified here.