“Ray Kurzweil and Uploading: Just Say No!”, Nick Agar

A new paper has gone up in the November 2011 JET: “Ray Kurzweil and Uploading: Just Say No!” by Nick Agar; abstract:

There is a debate about the possibility of mind-uploading – a process that purportedly transfers human minds and therefore human identities into computers. This paper bypasses the debate about the metaphysics of mind-uploading to address the rationality of submitting yourself to it. I argue that an ineliminable risk that mind-uploading will fail makes it prudentially irrational for humans to undergo it.


The argument is a variant of Pascal’s wager which Agar calls “Searle’s wager”. As far as I can tell, the paper mostly contains ideas he has already written about in his book; from Michael Hauskeller’s review of Agar’s Humanity’s End: Why We Should Reject Radical Enhancement:

Starting with Kurzweil, he gives a detailed account of the latter’s “Law of Accelerating Returns” and the ensuing techno-optimism, which leads Kurzweil to believe that we will eventually be able to get rid of our messy bodies and gain virtual immortality by uploading ourselves into a computer. The whole idea is ludicrous, of course, but Agar takes it quite seriously and tries hard to convince us that “it may take longer than Kurzweil thinks for us to know enough about the human brain to successfully upload it” (45) – as if this lack of knowledge was the main obstacle to mind-uploading. Agar’s principal objection, however, is that it will always be irrational for us to upload our minds onto computers, because we will never be able to completely rule out the possibility that, instead of continuing to live, we will simply die and be replaced by something that may be conscious or unconscious, but in any case is not identical with us. While this is certainly a reasonable objection, the way Agar presents it is rather odd. He takes Pascal’s ‘Wager’ (which was designed to convince us that believing in God is always the rational thing to do, because by doing so we have little to lose and a lot to win) and refashions it so that it appears irrational to upload one’s mind, because the procedure might end in death, whereas refusing to upload will keep us alive and is hence always a safe bet. The latter conclusion does not work, of course, since the whole point of mind-uploading is to escape death (which is unavoidable as long as we are stuck with our mortal, organic bodies). Agar argues, however, that by the time we are able to upload minds to computers, other life extension technologies will be available, so that uploading will no longer be an attractive option. This seems to be a curiously techno-optimistic view to take.

John Danaher (User:JohnD) further examines the wager, as expressed in the book, in 2 blog posts:

  1. “Should we Upload Our Minds? Agar on Searle’s Wager (Part One)”

  2. “Should we Upload Our Minds? Agar on Searle’s Wager (Part Two)”

After laying out what seems to be Agar’s argument, Danaher constructs the game-theoretic tree and continues the criticism above:

The initial force of the Searlian Wager derives from recognising the possibility that Weak AI is true. For if Weak AI is true, the act of uploading would effectively amount to an act of self-destruction. But recognising the possibility that Weak AI is true is not enough to support the argument. Expected utility calculations can often have strange and counterintuitive results. To know what we should really do, we have to know whether the following inequality really holds (numbering follows part one):

  • (6) Eu(~U) > Eu(U)

But there’s a problem: we have no figures to plug into the relevant equations, and even if we did come up with figures, people would probably dispute them (“You’re underestimating the benefits of uploading”, “You’re underestimating the costs of uploading” etc. etc.). So what can we do? Agar employs an interesting strategy. He reckons that if he can show that the following two propositions hold true, he can defend (6).

  • (8) Death (outcome c) is much worse for those considering whether to upload than living (outcome b or d).

  • (9) Uploading and surviving (a) is not much better, and possibly worse, than not uploading and living (b or d).
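
To see how premises (8) and (9) are meant to secure inequality (6), here is a minimal sketch in Python. Every probability and utility below is a hypothetical placeholder of my own, since (as Danaher notes) we have no agreed figures to plug in:

```python
# A minimal sketch of the Searlian Wager as an expected-utility
# calculation. All probabilities and utilities are hypothetical
# placeholders; the point is only to show the structure.

p_strong_ai = 0.5  # hypothetical probability that Strong AI is true

# Hypothetical utilities for the outcomes in Danaher's tree:
u_a = 100   # (a) upload and survive (Strong AI true)
u_b = 90    # (b) don't upload, live on (Weak AI true)
u_c = -100  # (c) upload and be destroyed (Weak AI true)
u_d = 90    # (d) don't upload, live on (Strong AI true anyway)

eu_upload = p_strong_ai * u_a + (1 - p_strong_ai) * u_c      # Eu(U)
eu_not_upload = p_strong_ai * u_d + (1 - p_strong_ai) * u_b  # Eu(~U)

# Premise (8) makes u_c much worse than u_b/u_d; premise (9) keeps u_a
# from being much better than u_b/u_d. Together they secure (6):
print(f"Eu(U)  = {eu_upload}")      # 0.5*100 + 0.5*(-100) =  0
print(f"Eu(~U) = {eu_not_upload}")  # 0.5*90  + 0.5*90     = 90
assert eu_not_upload > eu_upload    # (6): refuse to upload
```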

2. A Fate Worse than Death?
On the face of it, (8) seems to be obviously false. There would appear to be contexts in which the risk of self-destruction does not outweigh the potential benefit (however improbable) of continued existence. Such a context is often exploited by the purveyors of cryonics. It looks something like this:

You have recently been diagnosed with a terminal illness. The doctors say you’ve got six months to live, tops. They tell you to go home, get your house in order, and prepare to die. But you’re having none of it. You recently read some adverts for a cryonics company in California. For a fee, they will freeze your disease-ridden body (or just the brain!) to a cool −196°C and keep it in storage with instructions that it only be thawed out at such a time when a cure for your illness has been found. What a great idea, you think to yourself. Since you’re going to die anyway, why not take the chance (make the bet) that they’ll be able to resuscitate and cure you in the future? After all, you’ve got nothing to lose.

This is a persuasive argument. Agar concedes as much. But he thinks the wager facing our potential uploader is going to be crucially different from that facing the cryonics patient. The uploader will not face the choice between certain death, on the one hand, and possible death/possible survival, on the other. No; the uploader will face the choice between continued biological existence with biological enhancements, on the one hand, and possible death/possible survival (with electronic enhancements), on the other.

The reason has to do with the kinds of technological wonders we can expect to have developed by the time we figure out how to upload our minds. Agar reckons we can expect such wonders to allow for the indefinite continuance of biological existence. To support his point, he appeals to the ideas of Aubrey de Grey. de Grey thinks that—given appropriate funding—medical technologies could soon help us to achieve longevity escape velocity (LEV). This is when new anti-aging therapies consistently add years to our life expectancies faster than age consumes them.

If we do achieve LEV, and we do so before we achieve uploadability, then premise (8) would seem defensible. Note that this argument does not actually require LEV to be highly probable. It only requires it to be relatively more probable than the combination of uploadability and Strong AI.
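
The LEV idea is easy to make concrete. A toy simulation, with a purely hypothetical rate of therapeutic progress, shows how any gain above one year of life expectancy per calendar year means aging never catches up:

```python
# A toy LEV simulation. The rate of progress is a pure assumption; the
# point is only the threshold at 1.0 years gained per year.

age = 30.0
remaining = 50.0     # remaining life expectancy, in years
gain_per_year = 1.5  # hypothetical: therapies add 1.5 years per year

for _ in range(100):
    age += 1.0                        # a year passes, consuming a year...
    remaining += gain_per_year - 1.0  # ...but therapies add back more
    if remaining <= 0:
        print(f"Died at age {age:.0f}")
        break
else:
    print(f"Age {age:.0f}: {remaining:.0f} years of life expectancy left")
    # -> Age 130: 100 years left, and growing without bound
```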
...3. Don’t you want Wikipedia on the Brain?
Premise (9) is a little trickier. It proposes that the benefits of continued biological existence are not much worse (and possibly better) than the benefits of Kurzweil-ian uploading. How can this be defended? Agar provides us with two reasons.

The first relates to the disconnect between our subjective perception of value and the objective reality. Agar points to findings in experimental economics that suggest we have a non-linear appreciation of value. I’ll just quote him directly since he explains the point pretty well:

For most of us, a prize of $100,000,000 is not 100 times better than one of $1,000,000. We would not trade a ticket in a lottery offering a one-in-ten chance of winning $1,000,000 for one that offers a one-in-a-thousand chance of winning $100,000,000, even when informed that both tickets yield an expected return of $100,000. … We have no difficulty in recognizing the bigger prize as better than the smaller one. But we don’t prefer it to the extent that it’s objectively. … The conversion of objective monetary values into subjective benefits reveals the one-in-ten chance at $1,000,000 to be significantly better than the one-in-a-thousand chance at $100,000,000 (pp. 68-69).

How do these quirks of subjective value affect the wager argument? Well, the idea is that continued biological existence with LEV is akin to the one-in-ten chance of $1,000,000, while uploading is akin to the one-in-a-thousand chance of $100,000,000: people are going to prefer the former to the latter, even if the latter might yield the same (or even a higher) payoff.
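
Agar’s lottery comparison can be checked directly. A sketch using a logarithmic utility curve reproduces his numbers; the particular curve is my assumption, but any sufficiently concave function behaves the same way:

```python
import math

# Sketch of the non-linear-value point, using log10 as a stand-in for
# the conversion of dollars into subjective benefit.

def subjective(dollars):
    return math.log10(dollars)

# Both tickets have the same expected *monetary* return, $100,000:
assert 0.1 * 1_000_000 == 0.001 * 100_000_000 == 100_000

# But their expected *subjective* values differ sharply:
eu_small = 0.1 * subjective(1_000_000)    # 0.1   * 6 = 0.6
eu_big = 0.001 * subjective(100_000_000)  # 0.001 * 8 = 0.008
print(eu_small, eu_big)  # the one-in-ten ticket is ~75x better, subjectively
```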

I have two concerns about this. First, my original formulation of the wager argument relied on the straightforward expected-utility-maximisation principle of rational choice. But by appealing to the risks associated with the respective wagers, Agar would seem to be incorporating some element of risk aversion into his preferred rationality principle. This would force a revision of the original argument (premise 5 in particular), albeit one that works in Agar’s favour. Second, the use of subjective valuations might affect our interpretation of the argument. In particular it raises the question: Is Agar saying that this is how people will in fact react to the uploading decision, or is he saying that this is how they should react to the decision?
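
Danaher’s first concern can also be made concrete: on the same kind of hypothetical payoffs as in the earlier sketch, expected-utility maximisation and a risk-averse rule like maximin can recommend opposite acts, which is why smuggling risk aversion into the rationality principle changes the argument:

```python
# The same hypothetical payoff table recommends different acts under
# different decision rules. All numbers are illustrative assumptions.

outcomes = {
    "upload":     {"strong_ai": 200, "weak_ai": -100},
    "not_upload": {"strong_ai": 90,  "weak_ai": 90},
}
p_strong_ai = 0.95  # hypothetical: suppose Strong AI is very likely

def expected_utility(act):
    o = outcomes[act]
    return p_strong_ai * o["strong_ai"] + (1 - p_strong_ai) * o["weak_ai"]

def worst_case(act):  # maximin: judge each act by its worst outcome
    return min(outcomes[act].values())

print(max(outcomes, key=expected_utility))  # -> 'upload'     (185 vs 90)
print(max(outcomes, key=worst_case))        # -> 'not_upload' (-100 vs 90)
```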

One point is worth noting: the asymmetry between uploading and cryonics is deliberate. There is nothing intrinsic to cryonics that exempts it from Searle’s wager over ‘destructive uploading’, because one can always commit suicide and then be cryopreserved (symmetrical with committing suicide and then being destructively scanned, or committing suicide by being destructively scanned). The asymmetry exists as a matter of policy: the cryonics organizations refuse to take suicides.

Overall, I agree with the 2 quoted critics: there is a small intrinsic philosophical risk to uploading, as well as the obvious practical risk that it won’t work, and this means uploading does not strictly dominate life extension or other actions. But this point is not controversial, and cryonicists have already embraced its analogue in practice (and we can expect any uploading to be either non-destructive or post-mortem); to the extent that Agar thinks it a large or overwhelming disadvantage for uploading (“It is unlikely to be rational to make an electronic copy of yourself and destroy your original biological brain and body.”), he is incorrect.
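
To make the dominance point concrete, one last sketch (reusing the hypothetical utilities from above) shows why non-destructive uploading dissolves the wager: scanning without destroying the original does at least as well as refusing to upload in every state of the world, however the Strong AI question falls:

```python
# Adding a non-destructive scan to the hypothetical payoff matrix.
# Utilities are the same illustrative assumptions as before.

outcomes = {
    "destructive_upload":     {"strong_ai": 100, "weak_ai": -100},
    "no_upload":              {"strong_ai": 90,  "weak_ai": 90},
    "non_destructive_upload": {"strong_ai": 100, "weak_ai": 90},
}
for act, payoffs in outcomes.items():
    print(f"{act}: worst case {min(payoffs.values())}")
# Only destructive uploading risks the catastrophic outcome; the
# non-destructive variant never does worse than staying biological.
```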