Superintelligence 27: Pathways and enablers

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome. This week we discuss the twenty-seventh section in the reading guide: Pathways and enablers.

This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable, page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Pathways and enablers” from Chapter 14


Summary

  1. Is hardware progress good?

    1. Hardware progress means machine intelligence will arrive sooner, which is probably bad.

    2. More hardware at a given point means less understanding is likely to be needed to build machine intelligence, and brute-force techniques are more likely to be used. These probably increase danger.

    3. More hardware progress suggests there will be more hardware overhang when machine intelligence is developed, and thus a faster intelligence explosion. This seems good inasmuch as it brings a higher chance of a singleton, but bad in other ways:

      1. Less opportunity to respond during the transition

      2. Less possibility of constraining how much hardware an AI can reach

      3. Flattens the playing field, allowing small projects a better chance. These are less likely to be safety-conscious.

    4. Hardware has other indirect effects, e.g. it allowed the internet, which contributes substantially to work like this. But perhaps we have enough hardware now for such things.

    5. On balance, more hardware seems bad, from the impersonal perspective.

  2. Would brain emulation be a good thing to happen?

    1. Brain emulation is coupled with ‘neuromorphic’ AI: if we try to build the former, we may get the latter. This is probably bad.

    2. If we achieved brain emulations, would this be safer than AI? Three putative benefits:

      1. “The performance of brain emulations is better understood”

        1. However, we have less idea how modified emulations would behave

        2. Also, AI can be carefully designed to be understood

      2. “Emulations would inherit human values”

        1. This might require higher fidelity than making an economically functional agent

        2. Humans are often not that nice. It’s not clear that human nature is a desirable template.

      3. “Emulations might produce a slower take-off”

        1. It isn’t clear why it would be slower. Perhaps emulations would be less efficient, so there would be less hardware overhang. Or perhaps emulations would not be qualitatively much better than humans, just faster and more numerous.

        2. A slower takeoff may lead to better control

        3. However, it also means more chance of a multipolar outcome, and that seems bad.

    3. If brain emulations are developed before AI, there may be a second transition to AI later.

      1. A second transition should be less explosive, because emulations would already be numerous and fast relative to the new AI.

      2. The control problem is probably easier if the cognitive differences between the controlling entities and the AI are smaller.

      3. If emulations are smarter than humans, this would have some of the same benefits as cognitive enhancement in the second transition.

      4. Emulations would extend the lead of the frontrunner in developing emulation technology, potentially allowing that group to develop AI with little disturbance from others.

      5. On balance, brain emulation probably reduces the risk from the first transition, but once the second transition is added the overall effect is unclear.

    4. Promoting brain emulation is better if:

      1. You are pessimistic about human resolution of the control problem

      2. You are less concerned about neuromorphic AI, a second transition, and multipolar outcomes

      3. You expect the timing of brain emulations and AI development to be close

      4. You prefer superintelligence to arrive neither very early nor very late

  3. The person-affecting perspective favors speed: present people are at risk of dying in the next century, and may be saved by advanced technology

Another view

I talked to Kenzi Amodei about her thoughts on this section. Here is a summary of her disagreements:

Bostrom argues that we probably shouldn’t celebrate advances in computer hardware. That is probably right, but here are counter-considerations to a couple of his arguments.

The great filter

A big reason Bostrom finds fast hardware progress broadly undesirable is that he judges the state risks from sitting around in our pre-AI situation to be low, relative to the step risk from AI. But the so-called ‘Great Filter’ gives us reason to question this assessment.

The argument goes like this. Observe that there are a lot of stars (we can detect roughly 10^22 of them). Next, note that we have never seen any alien civilizations, or distant suggestions of them. There might be aliens out there somewhere, but they certainly haven’t gone out and colonized the universe enough that we would notice them (see ‘The Eerie Silence’ for further discussion of how we might observe aliens).

This implies that somewhere on the path between a star existing and it being home to a civilization that ventures out and colonizes much of space, there is a ‘Great Filter’: at least one step that is hard to get past. 1-in-10^22 hard to get past. We know of somewhat hard steps at the start: a star might not have planets, or the planets may not be suitable for life. We don’t know how hard it is for life to start: this step could be most of the filter for all we know.
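As a rough illustration of how severe a constraint this is, here is a toy calculation (my own, not from the argument above; it assumes the filter is spread across some number of independent, equally hard steps, which is certainly an oversimplification):

```python
# Toy calculation: how severe does the Great Filter have to be, if at most
# ~1 in 10^22 stars ends up hosting a visibly colonizing civilization?
# Assumption (mine, not from the text above): the filter is split across
# k independent, equally hard steps.

total_pass_probability = 1e-22

for k in [1, 2, 5, 10]:
    per_step = total_pass_probability ** (1 / k)
    print(f"{k} equally hard step(s): each passed with probability ~{per_step:.1e}")

# Even split ten ways, each step is passed only ~0.6% of the time, so a
# finely divided filter still requires every step to be a serious barrier.
```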

If the filter is a step we have passed, there is nothing to worry about. But if it is a step in our future, then probably we will fail at it, like everyone else. And things that stop us from visibly colonizing the stars may well be existential risks.

At least one way of understanding anthropic reasoning suggests the filter is much more likely to be at a step in our future. Put simply, one is much more likely to find oneself in our current situation if being killed off on the way here is unlikely.

So what could this filter be? One thing we know is that it probably isn’t AI risk, at least of the powerful, tile-the-universe-with-optimal-computations sort that Bostrom describes. A rogue singleton colonizing the universe would be just as visible as an alien civilization colonizing the universe. From the perspective of the Great Filter, either one would be a ‘success’. But there are no successes that we can see.

What’s more, if we expect to be fairly safe once we have a successful superintelligent singleton, then this points at risks arising before AI.

So overall this argument suggests that AI is less concerning than we think, and that other risks (especially early ones) are more concerning than we think. It also suggests that AI is harder than we think.

Which means that if we buy this argument, we should put a lot more weight on the category of ‘everything else’, and especially the bits of it that come before AI. To the extent that known risks like biotechnology and ecological destruction don’t seem plausible, we should be more afraid of unknown unknowns that we aren’t even preparing for.

How much progress is enough?

Bostrom points to positive changes hardware has made to society so far. For instance, hardware allowed personal computers, bringing the internet, and with it the accretion of an AI risk community, producing the ideas in Superintelligence. But then he says we probably have enough: “hardware is already good enough for a great many applications that could facilitate human communication and deliberation, and it is not clear that the pace of progress in these areas is strongly bottlenecked by the rate of hardware improvement.”

This seems intuitively plausible. However, one could probably have erroneously made such assessments about all kinds of progress, all through history. Accepting them all would lead to madness, and we have no obvious way of telling them apart.

In the 1800s it probably seemed like we had enough machines to be getting on with, perhaps too many, and people probably felt overwhelmingly rich. In the sixties, too, it probably seemed like we had plenty of computation, and that hardware wasn’t a great bottleneck to social progress.

If a trend has brought progress so far, and the progress would have been hard to predict in advance, then it seems hard to conclude from one’s present vantage point that progress is basically done.

Notes

1. How is hardware progressing?

I’ve been looking into this lately, at AI Impacts. Here’s a figure of MIPS/$ growing, from Muehlhauser and Rieber.

(Note: I edited the vertical axis, to remove a typo)
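To get an intuitive handle on what a growth curve like that implies, here is a minimal doubling-time calculation. The two data points are purely illustrative placeholders, not values read off the Muehlhauser and Rieber figure:

```python
import math

# Illustrative only: infer a doubling time for MIPS/$ from two data points.
# These numbers are placeholders, not values read from the figure above.
year0, mips_per_dollar0 = 1990, 1.0       # hypothetical
year1, mips_per_dollar1 = 2010, 1.0e4     # hypothetical: 10,000x growth in 20 years

growth_factor = mips_per_dollar1 / mips_per_dollar0
doubling_time = (year1 - year0) * math.log(2) / math.log(growth_factor)
print(f"Implied doubling time: {doubling_time:.2f} years")  # ~1.5 years with these numbers
```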

2. Hardware-software indifference curves

It was brought up in this chapter that hardware and software can substitute for each other: if there is endless hardware, you can run worse algorithms, and vice versa. I find it useful to picture this as indifference curves, something like this:

(Image: Hypothetical curves of hardware-software combinations producing the same performance at Go (source).)
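For concreteness, here is a minimal sketch of how one might draw such curves, assuming (purely for illustration, not anything from the chapter) that performance is a Cobb-Douglas-style product of hardware and software quality:

```python
import numpy as np
import matplotlib.pyplot as plt

# Toy model (my assumption, not from the chapter): performance depends on
# hardware and software quality as performance = hardware**a * software**b.
# An indifference curve is the set of (hardware, software) pairs that give
# the same performance, i.e. software = (level / hardware**a) ** (1 / b).
a, b = 0.5, 0.5
hardware = np.linspace(0.1, 10, 200)

for level in [1, 2, 4]:
    software = (level / hardware**a) ** (1 / b)
    plt.plot(hardware, software, label=f"performance = {level}")

plt.xlabel("hardware (arbitrary units)")
plt.ylabel("software quality (arbitrary units)")
plt.title("Hypothetical hardware-software indifference curves")
plt.legend()
plt.show()
```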

I wrote about predicting AI given this kind of model here.

3. The potential for discontinuous AI progress

While we are on the topic of relevant stuff at AI Impacts, I’ve been investigating and quantifying the claim that AI might suddenly undergo huge amounts of abrupt progress (unlike brain emulations, according to Bostrom). As a step, we are finding other things that have undergone huge amounts of progress, such as nuclear weapons and high temperature superconductors:

(Figure originally from here)
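One way to quantify abruptness, as a rough sketch: express a jump as the number of years of progress it represents at the previous rate. This is just one plausible metric with invented numbers, not necessarily the exact measure used at AI Impacts:

```python
# One possible way to score how abrupt a jump is: express it as the number of
# years of progress it represents at the previous average rate. The data points
# below are invented purely for illustration.

def years_of_progress_in_final_jump(history):
    """history: chronological list of (year, value) pairs.
    Returns how many years of progress, at the average pre-jump rate,
    the final jump in the series represents."""
    (y0, v0), (y1, v1) = history[0], history[-2]
    prior_rate = (v1 - v0) / (y1 - y0)   # average units of progress per year before the jump
    jump = history[-1][1] - v1           # size of the final jump
    return jump / prior_rate

example = [(1940, 10.0), (1945, 12.0), (1950, 14.0), (1955, 100.0)]
print(years_of_progress_in_final_jump(example))  # 86 / 0.4 = 215 years of prior-rate progress
```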

4. The person-affecting perspective favors speed less as other prospects improve

I agree with Bostrom that the person-affecting perspective probably favors speeding many technologies, given the status quo. However, I think it’s worth noting that people with the person-affecting view should become scared of existential risk again as soon as society has achieved some modest chance of greatly extending life via specific technologies. So if you take the person-affecting view, and think there’s a reasonable chance of very long life extension within the lifetimes of many existing humans, you should be careful about trading off speed against risk of catastrophe.
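To make that tradeoff concrete, here is a toy expected-value calculation from the perspective of a currently existing person. Every probability and payoff below is invented for illustration; the only point is the direction of the comparison:

```python
# Toy model of the person-affecting tradeoff between speeding up technology and
# accepting catastrophic risk. All numbers are invented for illustration.

def expected_value(p_catastrophe, p_life_extension_if_safe,
                   value_long_life=1000.0, value_normal_life=1.0):
    """Expected value to a currently existing person: catastrophe is scored as 0,
    surviving to radical life extension as value_long_life, and an ordinary
    remaining lifespan as value_normal_life (all units arbitrary)."""
    p_safe = 1 - p_catastrophe
    return p_safe * (p_life_extension_if_safe * value_long_life
                     + (1 - p_life_extension_if_safe) * value_normal_life)

# Faster path: more catastrophe risk, slightly better odds of reaching life extension.
fast = expected_value(p_catastrophe=0.30, p_life_extension_if_safe=0.50)  # ~350
# Slower path: less catastrophe risk, slightly worse odds of life extension.
slow = expected_value(p_catastrophe=0.05, p_life_extension_if_safe=0.45)  # ~428
print(fast, slow)
# With these made-up numbers, once the baseline chance of life extension is
# substantial, the slower and safer path wins for existing people.
```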

5. It seems unclear that an emulation transition would be slower than an AI transition.

One reason to expect an emulation transition to proceed faster is that there is an unusual reason to expect abrupt progress there.

6. Beware of brittle arguments

This chapter presented a large number of detailed lines of reasoning for evaluating hardware and brain emulations. This kind of concern might apply.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Investigate in more depth how hardware progress affects factors of interest

  2. Assess in more depth the likely implications of whole brain emulation

  3. Measure better the hardware and software progress that we see (e.g. various efforts at AI Impacts and MIRI)

  4. Investigate the extent to which hardware and software can substitute (I describe more projects here)

  5. Investigate the likely timing of whole brain emulation (the Whole Brain Emulation Roadmap is the main work on this)

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group though is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about how collaboration and competition affect the strategic picture. To prepare, read “Collaboration” from Chapter 14. The discussion will go live at 6pm Pacific time next Monday 23 March. Sign up to be notified here.