Superintelligence Reading Group—Section 1: Past Developments and Present Capabilities

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome to the Superintelligence reading group. This week we discuss the first section in the reading guide, Past developments and present capabilities. This section considers the behavior of the economy over very long time scales, and the recent history of artificial intelligence (henceforth, ‘AI’). These two areas are excellent background if you want to think about large economic transitions caused by AI.

This post summarizes the section, and offers a few relevant notes, thoughts, and ideas for further investigation. My own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the part of the chapter most closely related to a point (not necessarily that the chapter is being cited for that specific claim).

Reading: Foreword, and Growth modes through State of the art from Chapter 1 (p1-18)


Summary

Economic growth:

  1. Economic growth has become radically faster over the course of human history. (p1-2)

  2. This growth has been uneven rather than continuous, perhaps corresponding to the farming and industrial revolutions. (p1-2)

  3. Thus history suggests large changes in the growth rate of the economy are plausible. (p2)

  4. This makes it more plausible that human-level AI will arrive and produce unprecedented levels of economic productivity.

  5. Predictions of much faster growth rates might also suggest the arrival of machine intelligence, because it is hard to imagine humans—slow as they are—sustaining such a rapidly growing economy. (p2-3)

  6. Thus economic history suggests that rapid growth caused by AI is more plausible than you might otherwise think.

The history of AI:

  1. Human-level AI has been predicted since the 1940s. (p3-4)

  2. Early predictions were often optimistic about when human-level AI would come, but rarely considered whether it would pose a risk. (p4-5)

  3. AI research has been through several cycles of relative popularity and unpopularity. (p5-11)

  4. By around the 1990s, ‘Good Old-Fashioned Artificial Intelligence’ (GOFAI) techniques based on symbol manipulation gave way to new methods such as artificial neural networks and genetic algorithms. These are widely considered more promising, in part because they are less brittle and can learn from experience more usefully. Researchers have also lately developed a better understanding of the underlying mathematical relationships between various modern approaches. (p5-11)

  5. AI is very good at playing board games. (p12-13)

  6. AI is used in many applications today (e.g. hearing aids, route-finders, recommender systems, medical decision support systems, machine translation, face recognition, scheduling, the financial market). (p14-16)

  7. In general, tasks we thought were intellectually demanding (e.g. board games) have turned out to be easy to do with AI, while tasks which seem easy to us (e.g. identifying objects) have turned out to be hard. (p14)

  8. An ‘optimality notion’ is the combination of a rule for learning and a rule for making decisions. Bostrom describes one of these: a kind of ideal Bayesian agent. This is impossible to actually make, but provides a useful measure for judging imperfect agents against (a toy sketch follows). (p10-11)
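    Such an agent cannot actually be built, in part because the ideal version maintains beliefs over every hypothesis it can formulate (in AIXI-style formalizations, a prior over all computable hypotheses, which is uncomputable). But the two ingredients are easy to see in a tiny invented world. The following is my own toy sketch, not Bostrom’s formalism, and all its numbers are made up:

    ```python
    # Toy sketch of an 'optimality notion': a Bayesian learning rule plus an
    # expected-utility decision rule. The world, observations, and payoffs
    # below are all invented for illustration.

    prior = {"rainy": 0.5, "sunny": 0.5}              # belief before observing
    likelihood = {                                     # P(observation | world)
        ("clouds", "rainy"): 0.8, ("clouds", "sunny"): 0.2,
        ("clear", "rainy"): 0.2, ("clear", "sunny"): 0.8,
    }
    utility = {                                        # payoff of (action, world)
        ("umbrella", "rainy"): 1.0, ("umbrella", "sunny"): 0.0,
        ("no umbrella", "rainy"): -2.0, ("no umbrella", "sunny"): 1.0,
    }

    def learn(belief, observation):
        """Learning rule: Bayesian conditioning on the observation."""
        unnormalized = {w: belief[w] * likelihood[(observation, w)] for w in belief}
        total = sum(unnormalized.values())
        return {w: p / total for w, p in unnormalized.items()}

    def decide(belief):
        """Decision rule: choose the action with the highest expected utility."""
        actions = {action for action, _ in utility}
        return max(actions, key=lambda a: sum(belief[w] * utility[(a, w)] for w in belief))

    belief = learn(prior, "clouds")
    print(belief)          # belief shifts toward 'rainy'
    print(decide(belief))  # -> 'umbrella'
    ```

    The ideal agent does exactly this, except over every hypothesis it can express and every sequence of future actions, which is what makes it impossible to run and useful only as a yardstick.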

Notes on a few things

  1. What is ‘superintelligence’? (p22 spoiler)
    In case you are too curious about what the topic of this book is to wait until week 3, a ‘superintelligence’ will soon be described as ‘any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest’. Vagueness in this definition will be cleared up later.

  2. What is ‘AI’?
    In particular, how does ‘AI’ differ from other computer software? The line is blurry, but basically AI research seeks to replicate the useful ‘cognitive’ functions of human brains (‘cognitive’ is perhaps unclear, but for instance it doesn’t have to be squishy or prevent your head from imploding). Sometimes AI research tries to copy the methods used by human brains. Other times it tries to carry out the same broad functions as a human brain, perhaps better than a human brain. Russell and Norvig (p2) divide prevailing definitions of AI into four categories: ‘thinking humanly’, ‘thinking rationally’, ‘acting humanly’ and ‘acting rationally’. For our purposes however, the distinction is probably not too important.

  3. What is ‘human-level’ AI?
    We are going to talk about ‘human-level’ AI a lot, so it would be good to be clear on what that is. Unfortunately the term is used in various ways, and often ambiguously. So we probably can’t be that clear on it, but let us at least be clear on how the term is unclear.

    One big ambiguity is whether you are talking about a machine that can carry out tasks as well as a human at any price, or a machine that can carry out tasks as well as a human at the price of a human. These are quite different, especially in their immediate social implications.

    Other ambiguities arise in how ‘levels’ are measured. If AI systems were to replace almost all humans in the economy, but only because they are so much cheaper—though they often do a lower quality job—are they human level? What exactly does the AI need to be human-level at? Anything you can be paid for? Anything a human is good for? Just mental tasks? Even mental tasks like daydreaming? Which or how many humans does the AI need to be the same level as? Note that in a sense most humans have been replaced in their jobs before (almost everyone used to work in farming), so if you use that metric for human-level AI, it was reached long ago, and perhaps farm machinery is human-level AI. This is probably not what we want to point at.

    Another thing to be aware of is the diversity of mental skills. If by ‘human-level’ we mean a machine that is at least as good as a human at each of these skills, then in practice the first ‘human-level’ machine will be much better than a human on many of those skills. It may not seem ‘human-level’ so much as ‘very super-human’.

    We could instead think of human-level as closer to ‘competitive with a human’, where the machine has some super-human talents and lacks some skills humans have. This usage is less common, I think because it is hard to define in a meaningful way. There are already machines for which a company is willing to pay more than for a human: in this sense a microscope might be ‘super-human’. There is no reason for a machine which is equal in value to a human to have the traits we are interested in talking about here, such as agency, superior cognitive abilities, or the tendency to drive humans out of work and shape the future. Thus we talk about AI which is at least as good as a human, but you should beware that the predictions made about such an entity may apply before the entity is technically ‘human-level’.


    (Figure: example of how the first ‘human-level’ AI may surpass humans in many ways.)

    Because of these ambiguities, AI researchers are sometimes hesitant to use the term; see, for example, these interviews.

  4. Growth modes (p1)
    Robin Hanson wrote the seminal paper on this issue. Here’s a figure from it, showing the step changes in growth rates. Note that both axes are logarithmic. Note also that the changes between modes don’t happen overnight. According to Robin’s model, we are still transitioning into the industrial era (p10 in his paper).

  5. What causes these transitions between growth modes? (p1-2)
    One might be happier making predictions about future growth mode changes if one had a unifying explanation for the previous changes. As far as I know, we have no good idea of what was so special about those two periods. There are many suggested causes of the industrial revolution, but nothing uncontroversially stands out as ‘twice in history’ levels of special. You might think the small number of datapoints would make this puzzle too hard. Remember however that there are also quite a lot of negative datapoints: you need a cause that was present at those two times, but absent at all the other times in history.

  6. Growth of growth
    It is also interesting to compare world economic growth to the total size of the world economy. For the last few thousand years, the economy seems to have grown faster more or less in proportion to its size (see figure below). Extrapolating such a trend would lead to an infinite economy in finite time. In fact, for the thousand years until 1950 such extrapolation would place an infinite economy in the late 20th century! The time since 1950 has apparently been strange.

    (Figure from here)
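    To see why such extrapolation blows up in finite time, here is a sketch of the arithmetic (my own, under the simplifying assumption that the proportional growth rate is exactly proportional to the economy’s size G):

    ```latex
    % If the proportional growth rate (1/G)(dG/dt) is itself proportional to G:
    \[
      \frac{1}{G}\frac{dG}{dt} = kG
      \quad\Longrightarrow\quad
      \frac{dG}{dt} = kG^{2}
      \quad\Longrightarrow\quad
      G(t) = \frac{G_{0}}{1 - kG_{0}t},
    \]
    % which is hyperbolic rather than exponential growth, and diverges at the
    % finite time $t^{*} = 1/(kG_{0})$. Fitting $k$ to the pre-1950 trend is
    % what would place that singularity in the late 20th century.
    ```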

  7. Early AI programs mentioned in the book (p5-6)
    You can see them in action: SHRDLU, Shakey, General Problem Solver (not quite in action), ELIZA.

  8. Later AI programs mentioned in the book (p6)
    Algorithmically generated Beethoven, algorithmic generation of patentable inventions, artificial comedy (requires download).

  9. Modern AI algorithms mentioned (p7-8, 14-15)
    Here is a neural network doing image recognition. Here is artificial evolution of jumping and of toy cars. Here is a face detection demo that can tell you your attractiveness (apparently not reliably), happiness, age, gender, and which celebrity it mistakes you for.

  10. What is maximum likelihood estimation? (p9)
    Bostrom points out that many types of artificial neural network can be viewed as classifiers that perform ‘maximum likelihood estimation’. If you haven’t come across this term before, the idea is to find the situation that would make your observations most probable. For instance, suppose a person writes to you and tells you that you have won a car. The situation that would have made this observation most probable is the one where you really have won a car, since in that case you are almost guaranteed to be told about it. Note that this doesn’t imply that you should believe you have won a car when someone tells you so. Being the target of a spam email might give you only a low probability of being told that you have won a car (a spam email may instead advertise products, or tell you that you have won a boat), but spam emails are so much more common than actually winning cars that, most of the time, if you get such an email you will not have won a car. If you would like a better intuition for maximum likelihood estimation, Wolfram Alpha has several demonstrations (requires free download).
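    To make the contrast concrete, here is a toy sketch (my own, with invented numbers) comparing the maximum likelihood answer with the answer you get once base rates are taken into account:

    ```python
    # Toy contrast between maximum likelihood and the posterior, using the
    # 'you have won a car' email example. All numbers are invented.

    prior = {"won a car": 1e-7, "spam": 0.05}        # hypothetical base rates
    likelihood = {"won a car": 0.99, "spam": 0.01}   # P(this email | situation)

    # Maximum likelihood: pick the situation that makes the observation most
    # probable, ignoring how common each situation is.
    mle = max(likelihood, key=likelihood.get)
    print("Maximum likelihood:", mle)                # -> 'won a car'

    # Bayes' rule also weighs how common each situation is.
    unnormalized = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnormalized.values())
    posterior = {s: p / total for s, p in unnormalized.items()}
    print("Posterior:", posterior)                   # -> 'spam' dominates
    ```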

  11. What are hill climbing algorithms like? (p9)
    The second large class of algorithms Bostrom mentions is hill climbing algorithms. The idea here is fairly straightforward, but if you would like a better basic intuition for what hill climbing looks like, Wolfram Alpha has a demonstration to play with (requires free download).
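    As a minimal illustration (my own sketch, not from the book): start somewhere, repeatedly try a small random step, and keep the step only when it improves the objective:

    ```python
    import random

    def objective(x):
        # A single smooth 'hill' with its peak at x = 2.
        return -(x - 2.0) ** 2

    x = 0.0  # starting guess
    for _ in range(10_000):
        candidate = x + random.uniform(-0.1, 0.1)  # try a nearby point
        if objective(candidate) > objective(x):    # uphill? keep the step
            x = candidate

    print(round(x, 2))  # close to 2.0, the top of the hill
    ```

    The well-known weakness is that on a landscape with many hills, this procedure gets stuck on whichever local peak it happens to climb first.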

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions:

  1. How have investments into AI changed over time? Here’s a start, estimating the size of the field.

  2. What does progress in AI look like in more detail? What can we infer from it? I wrote about algorithmic improvement curves before. If you are interested in plausible next steps here, ask me.

  3. What do economic models tell us about the consequences of human-level AI? Here is some such thinking; Eliezer Yudkowsky has written at length about his request for more.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about what AI researchers think about human-level AI: when it will arrive, what it will be like, and what the consequences will be. To prepare, read Opinions about the future of machine intelligence from Chapter 1 and also When Will AI Be Created? by Luke Muehlhauser. The discussion will go live at 6pm Pacific time next Monday 22 September. Sign up to be notified here.