For what it’s worth, I agreed with your position for years, but changed my opinion after Wei Dai suggested a new argument to me.
Suppose you have an upload saying “I’m conscious”. You start optimizing the program, step by little step, until you get a tiny program that just outputs the string “I’m conscious” without actually being conscious. How can we tell at which point the program lost consciousness? And if we can’t tell, then why are we sure that the process of scanning and uploading a biological brain doesn’t have similar problems? Especially if the uploading is done by an AI who might want to fit more people into the universe.
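To make the slippery slope concrete, here is a toy Python sketch. It is entirely my own construction for illustration: the functions and the output-only check are stand-ins, not anything proposed in the thread. The point it shows is that an optimizer which judges each rewrite solely by visible output passes every individual step, so no single step looks like the one where anything was lost.

```python
# Toy sketch of the thought experiment: an "optimizer" that judges each
# rewrite solely by the program's visible output. Every individual step
# passes the check, so only the end state reveals what happened.

def upload():
    # Stand-in for a detailed brain simulation: lots of internal
    # "processing" before the verbal report is produced.
    state = sum(i * i for i in range(10_000))  # pretend internal dynamics
    report = "I'm conscious" if state >= 0 else "?"
    return report

def step1():
    # Optimization 1: the internal dynamics are precomputed away.
    report = "I'm conscious"
    return report

def step2():
    # Optimization 2: even the intermediate variable is gone.
    return "I'm conscious"

def output_preserved(f, g):
    # The ONLY criterion this optimizer applies.
    return f() == g()

# Each step passes the behavioral check in isolation:
assert output_preserved(upload, step1)
assert output_preserved(step1, step2)
```

Each transition is behavior-preserving by the check's own lights, which is exactly why the check cannot tell you where on the slope the loss occurred.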
That answers problems 1 and 3, but not problem 2.
Moreover, the argument only establishes that the slope is slippery at the very bottom, after all introspective capability has been lost; no argument is provided about the top. And you're applying it to a single-step procedure with an easy before/after comparison, so we can't get a boiled-frog effect.
Overeager optimization is a serious concern once a mind is digitized, for sure.
Sorry, what before/after comparison are you thinking of?
The transition is the one in the OP—the digitization process itself, going from meat to, well, not-meat.
You only need to do that once.
The comparison would be by behavior—do they think differently, beyond what you’d expect from differing circumstances? Do they still seem human enough? Unless it is all very sudden, there will be plenty of time to notice inhumanity in the uploads.
This goes doubly if they can be placed in convincing androids, so that the circumstances differ as little as possible.
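The behavioral comparison described above can be sketched in a few lines of Python. This is purely illustrative and my own construction: the prompts, the drift threshold, and the function names are all hypothetical, and a real comparison would obviously be far richer than string-matching answers.

```python
# Toy sketch of a before/after behavioral comparison: flag the upload
# only when its answers diverge from the biological original's by more
# than differing circumstances would already explain.

def divergence(before, after):
    """Fraction of shared prompts on which the two answer sets differ."""
    assert before.keys() == after.keys()
    diffs = sum(1 for p in before if before[p] != after[p])
    return diffs / len(before)

def seems_human(before, after, expected_drift=0.1):
    # expected_drift is a hypothetical allowance for the fact that
    # circumstances (embodiment, environment) have changed.
    return divergence(before, after) <= expected_drift

before = {"favorite food?": "pizza", "fear death?": "yes", "2+2?": "4"}
after  = {"favorite food?": "pizza", "fear death?": "yes", "2+2?": "4"}
print(seems_human(before, after))  # True: no drift beyond the allowance
```

The design choice the thread is gesturing at is the baseline: by matching circumstances as closely as possible (e.g. convincing androids), the `expected_drift` allowance can be shrunk, making genuine inhumanity easier to detect.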