Replaceability as a virtue

I propose that it is altruistic to be replaceable, and that those who strive to be altruistic should therefore strive to be replaceable.

As far as I can Google, this does not seem to have been proposed before. LW should be a good place to discuss it. A community interested in rational and ethical behavior, and in how superintelligent machines may decide to replace mankind, should at least bother to refute the following argument.

Replaceability

Replaceability is “the state of being replaceable”. It isn’t binary. The price of the replacement matters: so a cookie is more replaceable than a big wedding cake. Adequacy of the replacement also makes a difference: a piston for an ancient Rolls Royce is less replaceable than one in a modern car, because it has to be hand-crafted and will be distinguishable. So something is more or less replaceable depending on the price and quality of its replacement.

Replaceability could be thought of as the inverse of the cost of having to replace something. Something that’s very replaceable has a low cost of replacement, while something that lacks replaceability has a high (up to unfeasible) cost of replacement. The cost of replacement plays into Total Cost of Ownership, and everything economists know about that applies. It seems pretty obvious that replaceability of possessions is good, much like cheap availability is good.
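The inverse relationship above can be sketched numerically. This is a minimal, hypothetical illustration (the function names and all the numbers are mine, not the post’s), assuming the simplest possible reading: replaceability as 1 divided by replacement cost, with replacement cost folded into a total cost of ownership.

```python
# Hypothetical sketch of replaceability as the inverse of replacement cost.
# All names and numbers are illustrative assumptions, not from the post.

def replaceability(replacement_cost: float) -> float:
    """Higher replacement cost -> lower replaceability."""
    if replacement_cost <= 0:
        raise ValueError("replacement cost must be positive")
    return 1.0 / replacement_cost

def total_cost_of_ownership(purchase_price: float,
                            upkeep: float,
                            replacement_cost: float) -> float:
    """One simple TCO reading: price + upkeep + eventual replacement."""
    return purchase_price + upkeep + replacement_cost

# A cookie (cheap to replace) vs. a big wedding cake (expensive to replace):
cookie = replaceability(2.0)
wedding_cake = replaceability(500.0)
assert cookie > wedding_cake  # the cookie is the more replaceable item
```

On this toy model, lowering an item’s replacement cost both raises its replaceability score and lowers its TCO, which is the sense in which the post claims replaceability of possessions is good.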

Some things (historical artifacts, art pieces) are valued highly precisely because of their irreplaceability. Although a few things could be said about the resale value of such objects, I’ll simplify and contend that these valuations are not rational.

The prac­ti­cal example

Anne manages the central database of Beth’s company. She’s the only one who has access to that database, the skillset required for managing it, and an understanding of how it all works; she has a monopoly on that combination.

This monopoly gives Anne control over her own replacement cost. If she works according to the state of the art, writes extensive and up-to-date documentation, makes proper backups, etc., she can be very replaceable, because her monopoly will be easily broken. If she refuses to explain what she’s doing, creates weird and fragile workarounds, and documents the database badly, she can reduce her replaceability and defend her monopoly. (A well-obfuscated database can take months for a replacement database manager to handle confidently.)

So Beth may still choose to replace Anne, but Anne can influence how expensive that’ll be for Beth. She can at least make sure her replacement needs to be shown the ropes, so she can’t be fired on a whim. But she might go further and practically hold the database hostage, which would certainly help her in salary negotiations if she does it right.

This makes it pretty clear how Anne can act altruistically in this situation, and how she can act selfishly. Doesn’t it?

The moral argument

To Anne, her replacement cost is an externality: she doesn’t pay it herself, but it influences the length and terms of her employment. To maximize the length of her employment and her salary, her replacement cost would have to be high.

To Beth, Anne’s replacement cost is part of the cost of employing her, and of course she wants it to be low. This is true for any pair of employer and employee: Anne is unusual only in that she has a great degree of influence on her replacement cost.

Therefore, if Anne documents her database properly etc., this increases her replaceability and constitutes altruistic behavior. Unless she values the positive feeling of doing her employer a favor more highly than she values the money she might make by avoiding replacement, this might even be true altruism.

Unless I suck at Google, replaceability doesn’t seem to have been discussed as an aspect of altruism. The two reasons for that I can see are:

  • replacing people is painful to think about

  • and it seems futile as long as people aren’t replaceable in more than very specific functions anyway.

But we don’t want or get the choice to kill one person to save the life of five, either, and such practical improbabilities shouldn’t stop us from considering our moral decisions. This is especially true in a world where copies, and hence replacements, of people are starting to look possible at least in principle.

  1. In some reasonably-near future, software is getting better at modeling people. We still don’t know what makes a process intelligent, but we can feed a couple of videos and a bunch of psychological data points into a people modeler, extrapolate everything else using a standard population, and the resulting model can have a conversation that could fool a four-year-old. The technology is already good enough for models of pets. While convincing models of complex personalities are at least another decade away, the tech is starting to become good enough for senile grandmothers.

    Obviously no-one wants granny to die. But the kids would like to keep a model of granny, and they’d like to make the model before the Alzheimer’s gets any worse, while granny is terrified she’ll get no more visits to her retirement home.

    What’s the ethical thing to do here? Surely the relatives should keep visiting granny. Could granny maybe have a model made, but keep it to herself, for release only through her Last Will and Testament? And wouldn’t it be truly awful of her to refuse to do that?

  2. Only slightly further into the future, we’re still mortal, but cryonics does appear to be working. Unfrozen people need regular medical aid, but the technology is only getting better, and anyway, the point is: something we can believe to be them can indeed come back.

    Some refuse to wait out these Dark Ages; they get themselves frozen for nonmedical reasons, to fast-forward across decades or centuries into a time when the really awesome stuff will be happening, and to get the immortality technologies they hope will be developed by then.

    In this scenario, wouldn’t fast-forwarders be considered selfish, because they impose on their friends the pain of their absence? And wouldn’t their friends mind it less if the fast-forwarders went to the trouble of having a good model (see above) made first?

  3. On some distant future Earth, minds can be uploaded completely. Brains can be modeled and recreated so effectively that people can make living, breathing copies of themselves and experience the inability to tell which instance is the copy and which is the original.

    Of course many adherents of soul theories reject this as blasphemous. A few more sophisticated thinkers worry that this devalues individuals to the point where superhuman AIs might conclude that as long as copies of everyone are stored on some hard drive orbiting Pluto, nothing of value is lost if every meatbody gets devoured into more hardware. The bottom line is: effective immortality is available, but some refuse it on principle.

    In this world, wouldn’t those who make themselves fully and infinitely replaceable want the same for everyone they love? Wouldn’t they consider it a dreadful imposition if a friend or relative refused immortality? After all, wasn’t not having to say goodbye anymore kind of the point?

These questions haven’t come up in the real world because people have never been replaceable in more than very specific functions. But I hope you’ll agree that if and when people become more replaceable, that will be regarded as a good thing, and it will be regarded as virtuous to use these technologies as they become available, because doing so spares one’s friends and family some or all of the cost of replacing oneself.

Replaceability as an altruist virtue

And if replaceability is altruistic in this hypothetical future, as well as in the limited sense of Anne and Beth, that implies replaceability is altruistic now. And even now, there are things we can do to increase our replaceability, i.e. to reduce the cost our bereaved will incur when they have to replace us. We can teach all our (valuable) skills, so others can replace us as providers of those skills. We can refrain from keeping (relevant) secrets, so others can learn what we know and replace us as sources of that knowledge. We can endeavour to live as long as possible, to postpone the cost. We can sign up for cryonics. There are surely other things each of us could do to increase our replaceability, but I can’t think of any an altruist wouldn’t consider virtuous.

As an altruist, I conclude that replaceability is a prosocial, unselfish trait, something we’d want our friends to have; in other words, a virtue. I’d go as far as to say that even bothering to set up a good Last Will and Testament is virtuous precisely because it reduces the cost my bereaved will incur when they have to replace me. And although none of us can be truly easily replaceable as of yet, I suggest we honor those who make themselves replaceable, and take pride in whatever replaceability we ourselves attain.

So, how re­place­able are you?