I don’t think a computer program can have any moral value; therefore, without the presence of a soul, people also have no moral value.
It’s hard to build intuitions about the moral value of intelligent programs right now, because there aren’t any around to talk to. But consider a hypothetical that’s as close to human as possible: uploads. Suppose someone you knew decided to undergo a procedure where his brain would be scanned and destroyed, and then a program based on that scan was installed on a humanoid robot body, so that it would act and think like he did; and when you talked to the robot, he told you that he still felt like the same person. Would that robot and the software on it have moral value?
I would have suggested pets. Or the software objects of Chang’s story.
It is interesting that HopeFox’s intuitions rebel at assigning moral worth to something that is easily copied. I think she is on to something. The pets and Chang-software-objects which acquire moral worth do so by long acquaintance with the bestower of worth. In fact, my intuitions do the same with the humans whom I value.
I agree that HopeFox is onto something there: most people think great works of art, or unique features of the natural world, have value, but that has nothing to do with having a soul... it has to do with irreducibility. An atom-by-atom duplicate of the Mona Lisa would not be the Mona Lisa; it would be a great work of science...
Well, it has nothing to do with what you think of as a ‘soul’.
Personally, I’m not that taken with the local tendency to demand that any problematic word be tabooed. But I think that it might have been worthwhile to make that demand of HopeFox when she first used the word ‘soul’.
Given my own background, I immediately attached a connotation of immortality upon seeing the word. And for that reason, I was puzzled at the conflation of moral worth with possession of a soul. Because my intuition tells me I should be more respectful of something that I might seriously damage than of someone that can survive anything I might do to it.
I agree, intuition is very difficult here. In this specific scenario, I’d lean towards saying yes—it’s the same person with a physically different body and brain, so I’d like to think that there is some continuity of the “person” in that situation. My brain isn’t made of the “same atoms” it was when I was born, after all. So I’d say yes. In fact, in practice, I would definitely assume said robot and software to have moral value, even if I wasn’t 100% sure.
However, if the original brain and body weren’t destroyed, and we now had two apparently identical individuals claiming to be people worthy of moral respect, then I’d be more dubious. I’d be extremely dubious of creating twenty robots running identical software (which seems entirely possible with the technology we’re supposing) and assigning them the moral status of twenty people. “People”, of the sort deserving of rights and dignity and so forth, shouldn’t be the sort of thing that can be arbitrarily created through a mechanical process. (And yes, human reproduction and growth is a mechanical process, so there’s a problem there too.)
Actually, come to think of it… if you have two copies of software (either electronic or neuron-based) running on two separate machines, but it’s the same software, could they be considered the same person? After all, they’ll make all the same decisions given the same stimuli, and thus are using the same decision process.
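To make the “same decision process” point concrete, here is a minimal sketch, assuming a purely deterministic agent (the DeterministicAgent class and its toy decision rule are invented for illustration, not anything from the thread): two copies started from the same state and fed the same stimuli stay identical, decision for decision.

```python
# Toy illustration with hypothetical names: two copies of one deterministic
# program, given the same inputs, make the same decisions and end in the same state.

class DeterministicAgent:
    def __init__(self, memory):
        self.memory = list(memory)

    def decide(self, stimulus):
        # The choice depends only on the current memory and the stimulus,
        # so identical histories guarantee identical choices.
        choice = (len(self.memory) + len(stimulus)) % 2
        self.memory.append((stimulus, choice))  # state update
        return choice

copy_a = DeterministicAgent(memory=[("shared past", 0)])
copy_b = DeterministicAgent(memory=[("shared past", 0)])

for stimulus in ["coffee or tea?", "take the job?", "tell the truth?"]:
    assert copy_a.decide(stimulus) == copy_b.decide(stimulus)

print("states still identical:", copy_a.memory == copy_b.memory)  # True
```

On that picture, “same software, same inputs” really does collapse into one decision process; the interesting question is what happens once the inputs stop being identical.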
Yes, the consensus seems to be that running two copies of yourself in parallel doesn’t give you more measure or moral weight. But if the copies receive different inputs, they’ll eventually (frantic handwaving) diverge into two different people who both matter. (Maybe when we can’t retrieve Copy-A’s current state from Copy-B’s current state and the respective inputs, because information about the initial state has been destroyed?)
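To put that parenthetical in concrete terms, here is a sketch under one loose assumption: an invented, deliberately lossy update rule called lossy_update (nothing here comes from the thread). Once each copy’s state update throws information away, Copy-B’s current state plus both input logs no longer pins down Copy-A’s current state, because the shared initial state can’t be recovered. Whether that kind of informational divergence is the right place to start counting “two people” is, of course, the handwavy part.

```python
# Hypothetical sketch of divergence with a non-invertible state update.

def lossy_update(state, stimulus):
    # Integer division discards the low bit of the old state, so two different
    # prior states can map to the same new state: the step is not invertible.
    return state // 2 + stimulus

# Two distinct prior states collide after one update with the same input:
assert lossy_update(36, 5) == lossy_update(37, 5)

state_a = state_b = 37                      # shared state at the moment of copying
inputs_a, inputs_b = [1, 0, 4], [2, 3, 4]   # the copies then see different inputs

for stim_a, stim_b in zip(inputs_a, inputs_b):
    state_a = lossy_update(state_a, stim_a)
    state_b = lossy_update(state_b, stim_b)

print("diverged states:", state_a, state_b)
# Given state_b and both input logs, the shared starting state (and hence
# state_a) is underdetermined, since lossy_update cannot be run backwards.
```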
Have you read the quantum physics sequence? Would you agree with me that nothing you learn about seemingly unrelated topics like QM should have the power to destroy the whole basis of your morality?