Scott Aaronson touched on these questions in his speculative writeup The Ghost in the Quantum Turing Machine:

Suppose it were possible to “upload” a human brain to a computer, and thereafter
predict the brain with unlimited accuracy. Who cares? Why should anyone even worry
that this would create a problem for free will or personal identity?
[...]
If any of these technologies—brain-uploading, teleportation, the
Newcomb Predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions”
about personal identity and free will would start to have practical consequences. Should you fax
yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads
or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being
run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them
without taking an implicit stance on questions that many people would prefer to regard as outside
the scope of science.
[...]
I’m against any irreversible destruction of
knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner. Such
destruction is worse the more valuable the thing destroyed, the longer it took to create, and the
harder it is to replace. From this basic revulsion to irreplaceable loss, hatred of murder, genocide,
the hunting of endangered species to extinction, and even (say) the burning of the Library of
Alexandria can all be derived as consequences.
Now, what about the case of “deleting” an emulated human brain from a computer memory?
The same revulsion applies in full force—if the copy deleted is the last copy in existence. If, however,
there are other extant copies, then the deleted copy can always be “restored from backup,” so
deleting it seems at worst like property damage. For biological brains, by contrast, whether such
backup copies can be physically created is of course exactly what’s at issue, and the freebit picture
conjectures a negative answer.