“Ray Kurzweil and Uploading: Just Say No!”, Nick Agar

A new paper has gone up in the November 2011 JET: “Ray Kurzweil and Uploading: Just Say No!” (videos) by Nick Agar (Wikipedia); abstract:

There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.


The argument is a variant of Pascal’s wager which he calls Searle’s wager. As far as I can tell, the paper mostly contains ideas he has already written about in his book; from Michael Hauskeller’s review of Agar’s Humanity’s End: Why We Should Reject Radical Enhancement:

Starting with Kurzweil, he gives a detailed account of the latter’s “Law of Accelerating Returns” and the ensuing techno-optimism, which leads Kurzweil to believe that we will eventually be able to get rid of our messy bodies and gain virtual immortality by uploading ourselves into a computer. The whole idea is ludicrous, of course, but Agar takes it quite seriously and tries hard to convince us that “it may take longer than Kurzweil thinks for us to know enough about the human brain to successfully upload it” (45) – as if this lack of knowledge was the main obstacle to mind-uploading. Agar’s principal objection, however, is that it will always be irrational for us to upload our minds onto computers, because we will never be able to completely rule out the possibility that, instead of continuing to live, we will simply die and be replaced by something that may be conscious or unconscious, but in any case is not identical with us. While this is certainly a reasonable objection, the way Agar presents it is rather odd. He takes Pascal’s ‘Wager’ (which was designed to convince us that believing in God is always the rational thing to do, because by doing so we have little to lose and a lot to win) and refashions it so that it appears irrational to upload one’s mind, because the procedure might end in death, whereas refusing to upload will keep us alive and is hence always a safe bet. The latter conclusion does not work, of course, since the whole point of mind-uploading is to escape death (which is unavoidable as long as we are stuck with our mortal, organic bodies). Agar argues, however, that by the time we are able to upload minds to computers, other life extension technologies will be available, so that uploading will no longer be an attractive option. This seems to be a curiously techno-optimistic view to take.

John Danaher (User:JohnD) further examines the wager, as expressed in the book, in 2 blog posts:

  1. “Should we Upload Our Minds? Agar on Searle’s Wager (Part One)”

  2. “Should we Upload Our Minds? Agar on Searle’s Wager (Part Two)”

After laying out what seems to be Agar’s argument, Danaher constructs the game-theoretic tree and continues the criticism above:

The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):

  • (6) Eu(~U) > Eu(U)

But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading” etc. etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6).

  • (8) Death (outcome c) is much worse for those considering to upload than living (outcome b or d).

  • (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).
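To make the structure of inequality (6) concrete, here is a minimal sketch of the wager as an expected-utility calculation; the probabilities and utilities are purely hypothetical placeholders (Danaher’s point is precisely that we have no real figures to plug in), chosen only to show how premises (8) and (9) push the comparison towards Eu(~U) > Eu(U):

```python
# A minimal sketch of inequality (6) from the Searlian Wager.
# All probabilities and utilities are hypothetical placeholders,
# not figures given by Agar or Danaher.

p_strong_ai = 0.5        # chance that Strong AI is true, so uploading preserves you
u_upload_survive = 100   # (a) upload and survive
u_bio_life = 90          # (b)/(d) continued biological existence (cf. premise 9)
u_death = 0              # (c) uploading destroys you (cf. premise 8)

# Uploading: you survive only if Strong AI is true; otherwise it is self-destruction.
eu_upload = p_strong_ai * u_upload_survive + (1 - p_strong_ai) * u_death

# Not uploading: continued biological existence either way.
eu_not_upload = u_bio_life

print(f"Eu(U)  = {eu_upload}")       # 50.0
print(f"Eu(~U) = {eu_not_upload}")   # 90
print("Refuse uploading" if eu_not_upload > eu_upload else "Upload")
```

Under placeholder values like these, refusing to upload wins unless one is nearly certain that Strong AI is true, which is exactly the work premises (8) and (9) are meant to do.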

2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:

You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool −196°C and keep it in storage with instructions that it only be thawed out at such a time when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.

This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.

The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey. de Grey thinks that—given appropriate funding—medical technologies could soon help us to achieve longevity escape velocity (LEV). This is when new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.
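The LEV condition just described is simple arithmetic, and a toy sketch may make it concrete; the yearly gain below is an illustrative placeholder, not an estimate of de Grey’s:

```python
# Illustrative arithmetic for "longevity escape velocity" (LEV):
# each calendar year consumes 1 year of remaining life expectancy,
# but hypothetical therapies add `gain_per_year` years back.
# The numbers are placeholders, not de Grey's estimates.

def lev_trajectory(initial_remaining=30.0, years=50, gain_per_year=1.3):
    remaining = initial_remaining
    history = []
    for _ in range(years):
        remaining += gain_per_year - 1  # net change per calendar year
        history.append(remaining)
    return history

trajectory = lev_trajectory()
print(trajectory[:3])    # remaining expectancy rises each year when gain_per_year > 1
print(trajectory[-1])    # still growing after 50 years: "escape velocity"
```

Whenever the yearly gain exceeds the one year that aging consumes, remaining life expectancy grows without bound; below that threshold, the escape fails.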

If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.
...3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweil-ian uploading. How can this be defended? Agar provides us with two reasons.

The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:

For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000.... We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively... The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).

How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.
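A toy calculation may make the point concrete. The logarithmic utility function below is my own stand-in for whatever concave “subjective value” curve the experimental-economics findings suggest; it is not something Agar specifies:

```python
import math

# Toy version of the non-linear-value point: both lotteries have the same
# expected monetary value, but a concave (here logarithmic) utility function
# ranks the one-in-ten shot at $1,000,000 well above the one-in-a-thousand
# shot at $100,000,000. The log utility is an assumption for illustration.

lotteries = {
    "1-in-10 chance of $1,000,000": (0.1, 1_000_000),
    "1-in-1,000 chance of $100,000,000": (0.001, 100_000_000),
}

for name, (p, prize) in lotteries.items():
    expected_value = p * prize              # $100,000 in both cases
    expected_utility = p * math.log(prize)  # concave "subjective" value
    print(f"{name}: EV = ${expected_value:,.0f}, E[log-utility] = {expected_utility:.3f}")

# EV is $100,000 for both, but E[log-utility] is ~1.382 vs ~0.018, so the
# "safer" lottery wins once money is converted into subjective benefit.
```

On this reading, LEV-assisted biological life plays the role of the safer lottery and uploading the role of the long shot.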

I have two concerns about this. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation principle of rational choice. But by appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar’s favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular it raises the question: Is Agar saying that this is how people will in fact react to the uploading decision, or is he saying that this is how they should react to the decision?

One point is worth noting: the asymmetry between uploading and cryonics is deliberate. There is nothing intrinsic to cryonics which renders it different from Searle’s wager with ‘destructive uploading’, because one can always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned / committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.

Overall, I agree with the 2 quoted people; there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won’t work, and this means uploading does not strictly dominate life-extension or other actions. But this is not a controversial point and has already been embraced in practice by cryonicists in their analogous way (and we can expect any uploading to be either non-destructive or post-mortem), and to the extent that Agar thinks that this is a large or overwhelming disadvantage for uploading (“It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body.”), he is incorrect.