Suppose the copy is not a perfect replication. Suppose you can emulate your brain with 90% accuracy cheaply, and that each additional percentage point of accuracy requires 2X more computing power. This shifts the issue from one of “magical thinking” (supposing that a software replica that is 100% accurate differs from the exact same software running on a different substrate) to a question of whether a simulation that is only 90% accurate is “good enough”.
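That scaling assumption makes the cost blow up quickly. As a rough sketch (normalizing the 90%-accurate emulation to a cost of 1, which is my own illustrative assumption, not anything from the scenario beyond the "2X per percentage point" rule):

```python
# Illustrative sketch of the assumed cost scaling: a 90%-accurate emulation
# costs 1 unit of compute, and each extra percentage point doubles the cost.
def relative_cost(accuracy_percent, baseline_accuracy=90):
    """Compute cost relative to the 90%-accurate emulation."""
    return 2 ** (accuracy_percent - baseline_accuracy)

for acc in (90, 95, 99, 100):
    print(f"{acc}% accurate: {relative_cost(acc):>5}x baseline compute")
# 90%: 1x, 95%: 32x, 99%: 512x, 100%: 1024x
```

So under these assumptions a "perfect" copy costs roughly a thousand times more than the cheap 90% one, which is why the "good enough" question has real teeth.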
Of course, “good enough” is a vague phrase, so it’s necessary to determine how we should evaluate the quality of a replica. I can think of a few criteria off the top of my head: speed of emulation, and similarity of behavioral responses in similar situations. It certainly makes for some puzzling philosophy over the nature of identity.