Superintelligence 19: Post-transition formation of a singleton

This is part of a weekly reading group on Nick Bostrom’s book, Superintelligence. For more information about the group, and an index of posts so far, see the announcement post. For the schedule of future topics, see MIRI’s reading guide.


Welcome. This week we discuss the nineteenth section in the reading guide: post-transition formation of a singleton. This corresponds to the last part of Chapter 11.

This post summarizes the section and offers a few relevant notes and ideas for further investigation. Some of my own thoughts and questions for discussion are in the comments.

There is no need to proceed in order through this post, or to look at everything. Feel free to jump straight to the discussion. Where applicable (and where I remember), page numbers indicate the rough part of the chapter that is most related (not necessarily that the chapter is being cited for the specific claim).

Reading: “Post-transition formation of a singleton?” from Chapter 11


Summary

  1. Even if the world remains multipolar through a transition to machine intelligence, a singleton might emerge later, for instance during a transition to a more extreme technology. (p176-7)

  2. If everything is faster after the first transition, a second transition may be more or less likely to produce a singleton. (p177)

  3. Emulations may give rise to ‘superorganisms’: clans of emulations who care wholly about their group. These would have an advantage because they could avoid agency problems, and make various uses of the ability to delete members. (p178-80)

  4. Improvements in surveillance resulting from machine intelligence might allow better coordination; however, machine intelligence will also make concealment easier, and it is unclear which force will be stronger. (p180-1)

  5. Machine minds may be able to make clearer precommitments than humans, changing the nature of bargaining somewhat. Maybe this would produce a singleton. (p183-4)

Another view

Many of the ideas around superorganisms come from Carl Shulman’s paper, Whole Brain Emulation and the Evolution of Superorganisms. Robin Hanson critiques it:

...It seems to me that Shulman actually offers two somewhat different arguments, 1) an abstract argument that future evolution generically leads to superorganisms, because their costs are generally less than their benefits, and 2) a more concrete argument, that emulations in particular have especially low costs and high benefits...

...On the general abstract argument, we see a common pattern in both the evolution of species and human organizations — while winning systems often enforce substantial value sharing and loyalty on small scales, they achieve much less on larger scales. Values tend to be more integrated in a single organism’s brain, relative to larger families or species, and in a team or firm, relative to a nation or world. Value coordination seems hard, especially on larger scales.

This is not especially puzzling theoretically. While there can be huge gains to coordination, especially in war, it is far less obvious just how much one needs value sharing to gain action coordination. There are many other factors that influence coordination, after all; even perfect value matching is consistent with quite poor coordination. It is also far from obvious that values in generic large minds can easily be separated from other large mind parts. When the parts of large systems evolve independently, to adapt to differing local circumstances, their values may also evolve independently. Detecting and eliminating value divergences might in general be quite expensive.

In general, it is not at all obvious that the benefits of more value sharing are worth these costs. And even if more value sharing is worth the costs, that would only imply that value-sharing entities should be a bit larger than they are now, not that they should shift to a world-encompassing extreme.

On Shulman’s more concrete argument, his suggested single-version approach to em value sharing, wherein a single central em only allows (perhaps vast numbers of) brief copies, can suffer from greatly reduced innovation. When em copies are assigned to and adapt to different tasks, there may be no easy way to merge their minds into a single common mind containing all their adaptations. The single em copy that is best at doing an average of tasks, may be much worse at each task than the best em for that task.

Shulman’s other concrete suggestion for sharing em values is “psychological testing, staged situations, and direct observation of their emulation software to form clear pictures of their loyalties.” But genetic and cultural evolution has long tried to make human minds fit well within strongly loyal teams, a task to which we seem well adapted. This suggests that moving our minds closer to a “borg” team ideal would cost us somewhere else, such as in our mental agility.

On the concrete coordination gains that Shulman sees from superorganism ems, most of these gains seem cheaply achievable via simple long-standard human coordination mechanisms: property rights, contracts, and trade. Individual farmers have long faced starvation if they could not extract enough food from their property, and farmers were often out-competed by others who used resources more efficiently.

With ems there is the added advantage that em copies can agree to the “terms” of their life deals before they are created. An em would agree that it starts life with certain resources, and that life will end when it can no longer pay to live. Yes there would be some selection for humans and ems who peacefully accept such deals, but probably much less than needed to get loyal devotion to and shared values with a superorganism.

Yes, with high value sharing ems might be less tempted to steal from other copies of themselves to survive. But this hardly implies that such ems no longer need property rights enforced. They’d need property rights to prevent theft by copies of other ems, including being enslaved by them. Once a property rights system exists, the additional cost of applying it within a set of em copies seems small relative to the likely costs of strong value sharing.

Shulman seems to argue both that superorganisms are a natural endpoint of evolution, and that ems are especially supportive of superorganisms. But at most he has shown that ems organizations may be at a somewhat larger scale, not that they would reach civilization-encompassing scales. In general, creatures who share values can indeed coordinate better, but perhaps not by much, and it can be costly to achieve and maintain shared values. I see no coordinate-by-values free lunch...

Notes

1. The natural endpoint

Bostrom says that a singleton is a natural conclusion of the long-term trend toward larger scales of political integration (p176). It seems helpful here to be more precise about what we mean by singleton. Something like a world government does seem to be a natural conclusion to long-term trends. However, this seems different from the kind of singleton I took Bostrom to previously be talking about. A world government would by default only make a certain class of decisions, for instance about global-level policies. There has been a long-term trend for the largest political units to become larger; however, there have always been smaller units as well, making different classes of decisions, down to the individual. I’m not sure how to measure the mass of decisions made by different parties, but it seems like individuals may be making more decisions more freely than ever, and the large political units have less ability than they once did to act against the will of the population. So the long-term trend doesn’t seem to point to an overpowering ruler of everything.

2. How value-aligned would emulated copies of the same person be?

Bostrom doesn’t say exactly how ‘emulations that were wholly altruistic toward their copy-siblings’ would emerge. It seems to be some combination of natural ‘altruism’ toward oneself and selection for people who react to copies of themselves with extreme altruism (confirmed by a longer interesting discussion in Shulman’s paper). How easily one might select for such people depends on how humans generally react to being copied: in particular, whether they treat a copy like part of themselves, or merely like a very similar acquaintance.

The answer to this doesn’t seem obvious. Copies seem likely to agree strongly on questions of global values, such as whether the world should be more capitalistic, or whether it is admirable to work in technology. However, I expect many (perhaps most) failures of coordination come from differences in selfish values: e.g. I want me to have money, and you want you to have money. And if you copy a person, it seems fairly likely to me that the copies will both still want the money themselves, more or less.
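
To make this concrete, here is a minimal toy sketch (my own illustration, not from the book or Shulman’s paper) of why agreement on shared, non-indexical values does not dissolve conflict over indexical ones. The utility function and the 0.8 selfishness weight are arbitrary assumptions chosen for illustration:

```python
# Toy illustration (hypothetical numbers): two perfect copies share all
# "global" values, but each also cares indexically about its own payoff.

def utility(own_share, shared_value, selfish_weight=0.8):
    """Assumed utility: a weighted mix of an indexical payoff (own_share)
    and a non-indexical value both copies endorse equally (shared_value)."""
    return selfish_weight * own_share + (1 - selfish_weight) * shared_value

pot = 100          # resource the two copies must split between them
shared_value = 50  # a 'global' value term, identical for both copies

for a_share in (50, 70, 90):
    u_a = utility(a_share, shared_value)
    u_b = utility(pot - a_share, shared_value)
    print(f"copy A gets {a_share}: U_A = {u_a:.0f}, U_B = {u_b:.0f}")

# U_A rises exactly as U_B falls: perfect agreement on the shared term
# does nothing to resolve the zero-sum conflict over the indexical term.
```

Only if the selfish weight were near zero, roughly the ‘wholly altruistic’ case Bostrom describes, would the copies’ interests actually coincide.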

From other examples of similar people—identical twins, family, people and their future selves—it seems people are unusually altruistic to similar people, but still very far from ‘wholly altruistic’. Emulation siblings would be much more similar than identical twins, but who knows how far that would move their altruism?

Shulman points out that many people hold views about personal identity that would imply that copies share identity to some extent. The translation between philosophical views and actual motivations is not always complete, however.

3. Contemporary family clans

Family-run firms are a place to get some information about the trade-off between reducing agency problems and having access to a wide range of potential employees. From a brief perusal of the internet, it seems ambiguous whether they do better. One could try to separate out the factors that help them do better or worse.

4. How big a problem is disloyalty?

I wondered how big a problem insider disloyalty really was for companies and other organizations. Would it really be worth all this loyalty testing? I can’t find much about it quickly, but 59% of respondents to a survey apparently said they had some kind of problem with insiders. The same report suggests that a bunch of costly initiatives, such as intensive psychological testing, are currently on the table to address the problem. Also, apparently it’s enough of a problem for someone to be trying to solve it with mind-reading, though that probably doesn’t say much.

5. AI already contributing to the surveillance-secrecy arms race

Artificial intelligence will help with surveillance sooner and more broadly than just in the observation of people’s motives; e.g. here and here.

6. SMBC is also pondering these topics this week


In-depth investigations

If you are particularly interested in these topics, and want to do further research, these are a few plausible directions, some inspired by Luke Muehlhauser’s list, which contains many suggestions related to parts of Superintelligence. These projects could be attempted at various levels of depth.

  1. What are the present and historical barriers to coordination, between people and organizations? How much have these been lowered so far? How much difference has it made to the scale of organizations, and to productivity? How much further should we expect these barriers to be lessened as a result of machine intelligence?

  2. Investigate the implications of machine intelligence for surveillance and secrecy in more depth.

  3. Are multipolar scenarios safer than singleton scenarios? Muehlhauser suggests directions.

  4. Explore ideas for safety in a singleton scenario via temporarily multipolar AI. e.g. uploading FAI researchers (See Salamon & Shulman, “Whole Brain Emulation, as a platform for creating safe AGI.”)

  5. Which kinds of multipolar scenarios would be more likely to resolve into a singleton, and how quickly?

  6. Can we get whole brain emulation without producing neuromorphic AGI slightly earlier or shortly afterward? See section 3.2 of Eckersley & Sandberg (2013).

If you are interested in anything like this, you might want to mention it in the comments, and see whether other people have useful thoughts.

How to proceed

This has been a collection of notes on the chapter. The most important part of the reading group, though, is discussion, which is in the comments section. I pose some questions for you there, and I invite you to add your own. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

Next week, we will talk about the ‘value loading problem’. To prepare, read “The value-loading problem” through “Motivational scaffolding” from Chapter 12. The discussion will go live at 6pm Pacific time next Monday 26 January. Sign up to be notified here.