Any ethical theory that depends on demarcating individuals, or “counting people”, appears doomed.
It seems likely that in the future, “individuals” will be constantly forked and merged/discarded as a matter of course. And like forking processes in Unix, such operations will probably make use of copy-on-write memory to save resources. Intuitively it makes little sense to attach a great deal of ethical significance to the concept of “individual” in those circumstances.
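The fork analogy can be made concrete with a minimal sketch (the "memories" list is purely illustrative). After `os.fork()`, parent and child share memory pages copy-on-write: nothing is duplicated until one side writes, at which point only the touched page is copied, and the two processes silently diverge.

```python
import os

# Shared state before the fork: both "individuals" start from it.
memories = ["childhood", "yesterday"]

pid = os.fork()
if pid == 0:
    # Child process: diverges by forming a new memory. The shared
    # page is physically copied only at this write (copy-on-write).
    memories.append("life as the fork")
    os._exit(0)
else:
    os.waitpid(pid, 0)
    # Parent process: its copy is untouched by the child's write.
    print(memories)  # ['childhood', 'yesterday']
```

Until the child's `append`, there is no physical fact distinguishing the two copies' memories, which is part of what makes "counting people" awkward here.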
Is it time to give up, and start looking for ethical theories that don’t depend on a concept of “individual”? I’m curious what your thoughts are.
Arguably, the concept of “individual” is incoherent even with ordinary humans, for at least two reasons.
First, one could argue that the human brain doesn’t operate as a single agent in any meaningful sense, but instead consists of a whole bunch of different agents struggling to gain control of external behavior—and what we perceive as our stream of consciousness is mostly just delusional confabulation giving rise to the fiction of a unified mind thinking and making decisions. (The topic was touched upon in this LW post and the subsequent discussion.)
Second, it’s questionable whether the concept of personal identity across time is anything more than an arbitrary subjective preference. You believe that a certain entity that is expected to exist tomorrow can be identified as your future self, so you assign it a special value. From the evolutionary perspective, it’s clear why humans have this value, and the concept is more or less coherent assuming the traditional biological constraints on human life, but it completely breaks down once this assumption is relaxed (as discussed in this recent thread). Therefore, one could argue that the idea of an “individual” existing through time has no objective basis to begin with, and the decision to identify entities that exist in different instants of time as the same “individual” can’t be other than a subjective whim.
I haven’t read and thought about these problems enough to form a definite opinion yet, but it seems to me that if we’re really willing to go for a no-holds-barred reductionist approach, they should both be considered very seriously. Trouble is, their implications don’t sound very pleasant.
It strikes me that there’s a somewhat fuzzy continuum in both directions. The concept of a coherent identity is largely a factor of how aligned the interests of the component entities are. This ranges all the way from individual genes or DNA sequences, through cells and sub-agents in the brain, past the individual human and up through family, community, nation, company, religion, species and beyond.
Coalitions of entities with interests that are more aligned will tend to have a stronger sense of identity. Shifting incentives may lead to more or less alignment of interests and so change the boundaries where common identity is perceived. A given entity may form part of more than one overlapping coalition with a recognizable identity and shifting loyalties between coalitions are also significant.
Therefore, one could argue that the idea of an “individual” existing through time has no objective basis to begin with, and the decision to identify entities that exist in different instants of time as the same “individual” can’t be other than a subjective whim.
Evolution may have reasons for making us think this, but how do you get from there to the claim that the identification of an individual existing through time is subjective? You can quite clearly recognize that there is a being of approximately the same composition and configuration in the same location from one moment to the next.
Even (and especially) with the Mach/Barbour view that time as a fundamental coordinate doesn’t exist, you can still identify a persistent individual in that it is the only one with nearly-identical memories to another one at the nearest location in the (indistinguishable-particle based) configuration space. (Barbour calls this the “Machian distinguished simplifier” or “fundamental distance”, and it matches our non-subjective measures of time.)
ETA: See Vladimir_M’s response below; I had misread his comment, thereby criticizing a position he didn’t take. I’ll leave the above unchanged because of its discussion of fundamental distance as a related metric.
You can quite clearly recognize that there is a being of approximately the same composition and configuration in the same location from one moment to the next.
That’s why I wrote that “the concept [of personal identity] is more or less coherent assuming the traditional biological constraints on human life.” It falls apart when we start considering various transhuman scenarios where our basic intuitions no longer hold, and various intuition pump arguments provide conflicting results.
Arguably, once some of the standard arguments in these discussions have been taken seriously, our basic intuitions about our normal biological existence also start to seem arbitrary, even though those intuitions are clearly defined and a matter of universal consensus within the range of our normal everyday experiences.
Point taken, I misread you as saying that our intuitions were arbitrary specifically in the case of traditional biological life, not just when they try to generalize outside this “training set”. Sorry!
On the other hand, one could say that the human brain can be described as a collection of interconnected subsystems, acting more or less coherently and coordinated by neural activity, which we perceive as the stream of consciousness. The stream of consciousness can thus be seen as a unifying tool that lets us treat the brain’s activity as the operation of a single agent. This point of view, while remaining reductionist-compatible, reinforces the perception of the self as a real acting agent, thereby, hopefully, reinforcing the underlying neural coordination and making the brain, and oneself, more effective.
I’ll be convinced that personal identity is a subjective preference if someone can explain a strange coincidence: only the “tomorrow me” will have those few terabytes of my memories.
Therefore, one could argue that the idea of an “individual” existing through time has no objective basis to begin with, and the decision to identify entities that exist in different instants of time as the same “individual” can’t be other than a subjective whim.
That’s roughly my current view. Two minor points. I think “whim” may overstate the point. An instinct cooked into us by millions of years of evolution isn’t what I’d call “whim”. Also, “subjective” seems to presuppose the very subject whose reality is being questioned.
I think there are ethically significant situations here. Is it ethical to forcefully merge with one’s own copy? To create an unconscious copy and use it as a slave or for some other purpose? To forcefully create copies of someone else? To discard one’s own copy?
Why would a conscious agent be less significant if it can split/merge at will? Of course, voting would be meaningless, but is it reasonable to drop all ethics?
With regard to your own copies, it’s more a matter of practicality than ethics. Before you make your first copy, while you’re still uncertain about which copy you’ll be, you should come up with detailed rules about how you want your instances to treat each other, then precommit to follow them. That way, none of your copies can ever force another to do anything without one of them breaking a precommitment.
What if the second copy experiences one of those “click” moments that makes his/her goals diverge, and he/she is unable either to convince the first copy to break the precommitment on merging or to inflict this “click” moment on the first copy?
More importantly, does it count as gay or masturbation if you have sex with your copy?
I have a plan for a LW post on the subject, although I don’t know when I’ll get around to it.