The Moral Status of Independent Identical Copies

Future technologies pose a number of challenges to moral philosophy. One that I think has been largely neglected is the status of independent identical copies. (By “independent identical copies” I mean copies of a mind that do not physically influence each other, but haven’t diverged because they are deterministic and have the same algorithms and inputs.) To illustrate what I mean, consider the following thought experiment. Suppose Omega appears to you and says:

You and all other humans have been living in a simulation. There are 100 identical copies of the simulation distributed across the real universe, and I’m appearing to all of you simultaneously. The copies do not communicate with each other, but all started with the same deterministic code and data, and due to the extremely high reliability of the computing substrate they’re running on, have kept in sync with each other and will with near certainty do so until the end of the universe. But now the organization that is responsible for maintaining the simulation servers has nearly run out of money. They’re faced with 2 possible choices:

A. Shut down all but one copy of the simulation. That copy will be maintained until the universe ends, but the 99 other copies will instantly disintegrate into dust.
B. Enter into a fair gamble at 99:1 odds with their remaining money. If they win, they can use the winnings to keep all of the servers running. But if they lose, they have to shut down all copies.

According to that organization’s ethical guidelines (a version of utilitarianism), they are indifferent between the two choices and were just going to pick one randomly. But I have interceded on your behalf, and am letting you make this choice instead.
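The organization’s stated indifference falls out of a simple expected-value calculation over surviving copies. Here is a minimal sketch of that arithmetic, assuming a total utilitarianism that weighs every identical copy equally (an illustration of the organization’s reasoning, not an endorsement of it):

```python
from fractions import Fraction

# Expected number of surviving simulation copies under each choice, assuming
# a total utilitarianism that values every identical copy equally.

copies = 100

# Choice A: keep exactly one copy running until the universe ends.
expected_a = Fraction(1)

# Choice B: a fair gamble at 99:1 odds -- win with probability 1/100 and
# keep all 100 copies, lose with probability 99/100 and keep none.
p_win = Fraction(1, 100)
expected_b = p_win * copies + (1 - p_win) * 0

# The expected copy-count is identical under both choices, hence the
# organization's indifference.
print(expected_a, expected_b)  # 1 1
```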

Personally, I would not be indifferent between these choices. I would prefer A to B, and I guess that most people would as well.

I prefer A because of what might be called “identical copy immortality” (in analogy with quantum immortality). This intuition says that extra identical copies of me don’t add much utility, and destroying some of them, as long as one copy lives on, doesn’t reduce much utility. Besides this thought experiment, identical copy immortality is also evident in the low value we see in the “tiling” scenario, in which a (misguided) AI fills the universe with identical copies of some mind that it thinks is optimal, for example one that is experiencing great pleasure.

Why is this a problem? Because it’s not clear how it fits in with the various ethical systems that have been proposed. For example, utilitarianism says that each individual should be valued independently of others, and then added together to form an aggregate value. This seems to imply that each additional copy should receive full, undiscounted value, in conflict with the intuition of identical copy immortality.
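To make the conflict concrete, here is a hypothetical sketch contrasting straightforward additive aggregation with a valuation rule embodying the identical-copy-immortality intuition (both functions are illustrative toys, not proposed theories):

```python
# Two toy aggregation rules for n independent identical copies of a mind,
# each copy having per-copy utility u. Both are hypothetical sketches meant
# only to make the conflict explicit.

def additive_value(n: int, u: float) -> float:
    """Standard aggregative utilitarianism: each copy counts in full."""
    return n * u

def copy_immortality_value(n: int, u: float) -> float:
    """The 'identical copy immortality' intuition: extra identical copies
    add nothing, so long as at least one copy exists."""
    return u if n >= 1 else 0.0

u = 1.0
# Destroying 99 of 100 copies costs 99 units under additive aggregation...
loss_additive = additive_value(100, u) - additive_value(1, u)
# ...but costs nothing under the identical-copy-immortality intuition.
loss_intuition = copy_immortality_value(100, u) - copy_immortality_value(1, u)
print(loss_additive, loss_intuition)  # 99.0 0.0
```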

Similar issues arise in various forms of ethical egoism. In hedonism, for example, does doubling the number of identical copies of oneself double the value of the pleasure one experiences, or not? Why?

A full ethical account of independent identical copies would have to address the questions of quantum immortality and “modal immortality” (cf. modal realism), which I think are both special cases of identical copy immortality. In short, independent identical copies of us exist in other quantum branches, and in other possible worlds, so identical copy immortality seems to imply that we shouldn’t care much about dying, as long as some copies of us live on in those other “places”. Clearly, our intuition of identical copy immortality does not extend fully to quantum branches, and even less to other possible worlds, but we don’t seem to have a theory of why that should be the case.

A full account should also address more complex cases, such as when the copies are not fully independent, or not fully identical.

I’m raising the problem here without having a good idea how to solve it. In fact, some of my own ideas seem to conflict with this intuition in a way that I don’t know how to resolve. So if anyone has a suggestion, or pointers to existing work that I may have missed, I look forward to your comments.