I read this as assuming that all copies deterministically demonstrate absolute allegiance to the collective self. I question that assertion, but have no clear way of settling it one way or the other. If ‘re-merging’ is possible, mergeable copies intending to merge should probably be treated as a unitary entity rather than as individuals for the purposes of this discussion.
Ultimately, I read your position as stating that suicide is a human right; that secure deletion of an individual is not acceptable as a way to prevent ultimate harm to that individual; but that it is acceptable to prevent harm caused by that individual to others.
This is far from a settled issue, and it has an analogue in the question ‘should you terminate an uncomplicated pregnancy where the fetus has terminal birth defects?’ Anencephaly is a good example of this situation. The argument presented in the OP is consistent with a ‘yes’, and I read your line of argument as consistent with a clear ‘no’.
I acausally cooperate with agents who I evaluate to be similar to me. That includes most humans, but it includes myself REALLY HARD, and doesn’t include an unborn baby. (Because babies are just templates, and the thing that makes them like me is being in the world for a year or so.)
Is your position consistent with effective altruism?
The trap expressed in the OP is essentially this: approaching a particular problem involving uploaded consciousness with effective altruism as the decision-making framework led to a perverse incentive (brains in blenders!). The options at this point are: a) the perverse act is not actually perverse; b) effective altruism does not actually lead to that perverse act; or c) effective altruism is flawed, so try something else (like ‘ideological kin’ selection?).
You are unequivocal about your unwillingness to be on the receiving end of this brand of altruism. You have also asserted that you cooperate acausally with agents similar to you (based on degree of similarity?), and previously asserted that an agent who shares the sum total of your life experience, less the most recent year, can be cast aside and destroyed without thought or consequence. So… do I mark you down for option c?
Thanks again for the food for thought.