I believe the word “consciousness” is used in so many confused and conflicting ways that nobody should mention “consciousness” without clarifying what they mean by it. I will therefore replace your question with “How should we morally value emulations?”.
Personally, if an emulation behaved like a human in all respects except for physical presence, I would give them the same respect as I give a human, subject to the following qualifications:
I don’t believe multiple emulations with very similar memories should be treated the same as an equal number of humans.
I don’t believe emulations should be given voting rights unless there is very careful regulation on how they are created; otherwise manufacturers would have perverse incentives. [edit: Actually, what should be regulated is not when they can be created, but when they can be given voting rights.]
Similarly, a careful look at practical considerations must be given before granting emulations other civil rights.
If this situation actually occurs in my lifetime, I would have access to more details on how emulations and society with emulations work. This information may cause me to change my mind.
If emulations behave in noticeably different ways from humans, I would seek more information before making judgements.
In particular, according to my current moral intuition, I don’t give an argument of the form “This emulation behaves just like a human, but it might not actually be conscious” any weight.
I don’t believe emulations should be given voting rights unless there is very careful regulation on how they are created; otherwise manufacturers would have perverse incentives.
Do you in general support regulations on creating things with voting rights, to avoid manufacturers having perverse incentives?
Assuming you’re aiming to refer to creating humans:
It seems to me that there’s a qualitative difference between current methods of creating voters (i.e. childbearing) and creating a whole ton of emulations. Our current methods are distributed, slow, have a long time gap (so time discounting applies to incentives), and there are better options for abuse of authority than breeding new voters. Whereas effectively free creation of human-equivalent ems is fast, centralized, has effectively no time gap, and could easily warp the political context, assuming “political context” still matters in such a world.
But I think even thinking about voting rights for ems is solving the wrong problem. If we as a society determine that ems ought to have the rights and privileges of citizens, but doing so completely breaks democracy as we know it, it is likely that the proper response is not to rearrange voting rights, but to simply replace democracy with Something Else that better fits the new situation.
Democracy isn’t immutable. If it doesn’t work, find something else that does.
Considering your question, I have changed my position: in its current form, it applies equally well to both ems and humans. Also, note that careful regulation does not necessarily mean heavy regulation. In fact, heavy regulation carries the danger of creating perverse incentives for the regulators.
Some people forgo birth control for religious reasons, resulting in larger families. Am I correct in extrapolating that you would say a child of such parents has less basis to have their vote counted with equal force?
The regulation wasn’t supposed to be on creating the things, the regulation was supposed to be on giving them the right to vote once they have been created.
I’d suggest that in a situation where it is possible to, for instance, shove a person into a replicator and instantly get a billion copies with a 1 hour lifespan, we should indeed deny such copies voting rights.
Of course, creating doesn’t necessarily mean creating from scratch. Suppose nonresidents cannot vote, residents can vote, and the residency requirement is one hour. You can create residents from nonresidents by bussing them in and waiting. I would support a regulation that did not allow such newly created residents to vote.
I can’t think of any real-life situations where it’s easy enough to create voters that there are any such perverse incentives (real-life cases of bussing in nonresidents are usually just vote fraud).
I believe the word “consciousness” is used in so many confused and conflicting ways that nobody should mention “consciousness” without clarifying what they mean by it.
This is a good point; you’re absolutely right that I should have addressed this issue in the OP. There seems to be broad agreement among people who find consciousness puzzling that Chalmers’ description of the “hard problem” does a pretty good job of specifying where the puzzle lies, regardless of whether they agree with Chalmers’ other views (my impression is that few do).
Unfortunately, that doesn’t clarify it for me. I’ve seen descriptions along these lines, and if I thought they were coherent and consistent with each other I would have assumed they were referring to the same thing. In particular, this segment is confusing:
If someone says “I can see that you have explained how DNA stores and transmits hereditary information from one generation to the next, but you have not explained how it is a gene”, then they are making a conceptual mistake. All it means to be a gene is to be an entity that performs the relevant storage and transmission function. But if someone says “I can see that you have explained how information is discriminated, integrated, and reported, but you have not explained how it is experienced”, they are not making a conceptual mistake.
It seems to me like someone asking the second question is making a conceptual mistake of exactly the same nature as someone asking the first question.
I believe the word “consciousness” is used in so many confused and conflicting ways that nobody should mention “consciousness” without clarifying what they mean by it.
Concur. What I get from this post is “the word is hopelessly confused, and philosophical discussions on the topic are mostly arguments over the word.”