Superintelligence 18: Life in an algorithmic economy

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome. This week we discuss the eighteenth section in the reading guide: Life in an algorithmic economy. This corresponds to the middle of Chapter 11.

This post summarizes the section, offers a few relevant notes, and suggests ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Life in an algorithmic economy” from Chapter 11


Summary

  1. In a multipolar scenario, biological humans might lead poor and meager lives. (p166-7)

  2. The AIs might be worthy of moral consideration, and if so their wellbeing might be more important than that of the relatively few humans. (p167)

  3. AI minds might be much like slaves, even if they are not literally slaves. They may be selected for liking this. (p167)

  4. Because brain emulations would be very cheap to copy, it will often be convenient to make a copy and then later turn it off (in a sense killing a person). (p168)

  5. There are various other reasons that very short lives might be optimal for some applications. (p168-9)

  6. It isn’t obvious whether brain emulations would be happy working all of the time. Some relevant considerations are current human emotions in general and regarding work, probable selection for pro-work individuals, the evolutionary adaptiveness of happiness in the past and future (e.g. does happiness help you work harder?), and the absence of present sources of unhappiness such as injury. (p169-171)

  7. In the long run, artificial minds may not even be conscious, or have valuable experiences, if these are not the most effective ways for them to earn wages. If such minds replace humans, Earth might have an advanced civilization with nobody there to benefit. (p172-3)

  8. In the long run, artificial minds may outsource many parts of their thinking, thus becoming decreasingly differentiated as individuals. (p172)

  9. Evolution does not imply positive progress. Even those good things that evolved in the past may not withstand evolutionary selection in a new circumstance. (p174-6)

Another view

Robin Hanson on others’ hasty distaste for a future of emulations:

Parents sometimes disown their children, on the grounds that those children have betrayed key parental values. And if parents have the sort of values that kids could deeply betray, then it does make sense for parents to watch out for such betrayal, ready to go to extremes like disowning in response.

But surely parents who feel inclined to disown their kids should be encouraged to study their kids carefully before making such a choice. For example, parents considering whether to disown their child for refusing to fight a war for their nation, or for working for a cigarette manufacturer, should wonder to what extent national patriotism or anti-smoking really are core values, as opposed to being mere revisable opinions they collected at one point in support of other more-core values. Such parents would be wise to study the lives and opinions of their children in some detail before choosing to disown them.

I’d like people to think similarly about my attempts to analyze likely futures. The lives of our descendants in the next great era after this our industry era may be as different from ours as ours are from farmers’, or farmers’ are from foragers’. When they have lived as neighbors, foragers have often strongly criticized farmer culture, as farmers have often strongly criticized industry culture. Surely many have been tempted to disown any descendants who adopted such despised new ways. And while such disowning might hold them true to core values, if asked we would advise them to consider the lives and views of such descendants carefully, in some detail, before choosing to disown.

Similarly, many who live industry era lives and share industry era values may be disturbed to see forecasts of descendants with lifestyles that appear to reject many values they hold dear. Such people may be tempted to reject such outcomes, and to fight to prevent them, perhaps preferring a continuation of our industry era to the arrival of such a very different era, even if that era would contain far more creatures who consider their lives worth living, and be far better able to prevent the extinction of Earth civilization. And such people may be correct that such a rejection and battle holds them true to their core values.

But I advise such people to first try hard to see this new era in some detail from the point of view of its typical residents. See what they enjoy and what fills them with pride, and listen to their criticisms of your era and values. I hope that my future analysis can assist such soul-searching examination. If, after studying such detail, you still feel compelled to disown your likely descendants, I cannot confidently say you are wrong. My job, first and foremost, is to help you see them clearly.

More on whose lives are worth living here and here.

Notes

1. Robin Hanson is probably the foremost researcher on what the finer details of an economy of emulated human minds would be like. For instance, how fast various employees would run, how big cities would be, and whether people would hang out with their copies. See a TEDx talk, and writings here, here, here and here (some overlap—sorry). He is also writing a book on the subject, which you can read early if you ask him.

2. Bostrom says,

Life for biological humans in a post-transition Malthusian state need not resemble any of the historical states of man...the majority of humans in this scenario might be idle rentiers who eke out a marginal living on their savings. They would be very poor, yet derive what little income they have from savings or state subsidies. They would live in a world with extremely advanced technology, including not only superintelligent machines but also anti-aging medicine, virtual reality, and various enhancement technologies and pleasure drugs: yet these might be generally unaffordable.... (p166)

It’s true this might happen, but it doesn’t seem like an especially likely scenario to me. As Bostrom has pointed out in various places earlier, biological humans would do quite well if they have some investments in capital, do not have too much of their property stolen or artfully manoeuvred away from them, and do not undergo too much population growth themselves. These risks don’t seem so large to me.

3. Paul Christiano has an interesting article on capital accumulation in a world of machine intelligence.
4. In discussing worlds of brain emulations, we often talk about selecting people for having various characteristics—for instance, being extremely productive, hard-working, not minding frequent ‘death’, being willing to work for free and donate any proceeds to their employer (p167-8). However, there are only so many humans to select from, so we can’t necessarily select for all the characteristics we might want. Bostrom also talks of using other motivation selection methods, and modifying code, but it is interesting to ask how far you could get using only selection. It is not obvious to what extent one could meaningfully modify brain emulation code initially.
I’d guess fewer than one in a thousand people would be willing to donate everything to their employer, given a random employer. This means that to get this characteristic, you would have to lose a factor of 1000 on selecting for other traits. Altogether you have about 33 bits of selection power in the present world (that is, 7 billion is about 2^33; you can divide the world in half about 33 times before you get to a single person). Let’s suppose you use 5 bits on getting someone who both doesn’t mind their copies dying (I guess 1 bit, or half of people) and who is willing to work an 80-hour week (I guess 4 bits, or one in sixteen people). Let’s suppose you use the rest of your selection power (28 bits) on intelligence, for the sake of argument. You are getting a person of IQ 186. If instead you use 10 bits (2^10 is about 1000) on getting someone to donate all their money to their employer, you can only use 18 bits on intelligence, getting a person of IQ 167. Would it not often be better to have the worker who is twenty IQ points smarter and pay them above subsistence? (The sketch below works through this arithmetic.)
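Here is a minimal sketch of that arithmetic in Python, under the simplifying assumptions that IQ is normally distributed with mean 100 and standard deviation 15, and that spending n bits of selection means taking the top 2^-n fraction of the population. It reproduces the figures above to within about a point.

```python
from statistics import NormalDist

# Assumed model: IQ ~ Normal(100, 15); n bits of selection = top 2^-n fraction of people.
iq = NormalDist(mu=100, sigma=15)

def iq_after_bits(bits):
    """IQ threshold reachable after spending `bits` of selection power on intelligence."""
    return iq.inv_cdf(1 - 2 ** -bits)

# ~33 bits total (7 billion is about 2^33), minus 5 bits for the other two traits.
print(round(iq_after_bits(28)))  # about 187: all remaining bits spent on intelligence
print(round(iq_after_bits(18)))  # about 167: after spending 10 more bits on 'donates everything'
```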
5. A variety of valuable uses for cheap-to-copy, short-lived brain emulations are discussed in Whole brain emulation and the evolution of superorganisms, LessWrong discussion on the impact of whole brain emulation, and Robin’s work cited above.
6. Anders Sandberg writes about moral implications of emulations of animals and humans.

In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. Is the first functional whole brain emulation likely to be (1) an emulation of low-level functionality that doesn’t require much understanding of human cognitive neuroscience at the computational level, as described in Sandberg & Bostrom (2008), or is it more likely to be (2) an emulation that makes heavy use of advanced human cognitive neuroscience, as described by (e.g.) Ken Hayworth, or is it likely to be (3) something else?

  2. Extend and update our understanding of when brain emulations might appear (see Sandberg & Bostrom (2008)).

  3. Investigate the likelihood of a multipolar outcome.

  4. Follow Robin Hanson (see above) in working out the social implications of an emulation scenario.

  5. What kinds of responses to the default low-regulation multipolar outcome outlined in this section are likely to be made? E.g. is any strong regulation likely to emerge that avoids the features detailed in the current section?

  6. What measures are useful for ensuring good multipolar outcomes?

  7. What qualitatively different kinds of multipolar outcomes might we expect? E.g. brain emulation outcomes are one class.

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group, though, is the discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the possibility of a multipolar outcome turning into a singleton later. To prepare, read “Post-transition formation of a singleton?” from Chapter 11. The discussion will go live at 6pm Pacific time next Monday 19 January. Sign up to be notified here.