I still have trouble biting that bullet for some reason. Maybe I’m naive, I know, but there’s a sense in which I just can’t seem to let go of the question, “What will I see happen next?” I strive for altruism, but I’m not sure I can believe that subjective selfishness—caring about your own future experiences—is an incoherent utility function; that we are forced to be Buddhists who dare not cheat a neighbor, not because we are kind, but because we anticipate experiencing their consequences just as much as we anticipate experiencing our own. I don’t think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground.
I don’t really understand your reasoning here. It’s not a different person that will experience the consequences of hitting the ground, it’s Eliezer+5. Sure, Eliezer+5 is not identical to Eliezer, but he’s really, really, really similar. If Eliezer is selfish, it makes perfect sense to care about Eliezer+5 too, and no sense at all to care equally about Furcas+5, who is really different from Eliezer.
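One toy way to see that this kind of similarity-weighted selfishness is at least a coherent utility function (a purely illustrative sketch; the welfare terms and the similarity weight sim are my own assumptions, not anything a commenter proposed):

$$U_{\text{Eliezer}}(o) \;=\; \sum_{p}\, \mathrm{sim}(\text{Eliezer},\, p)\cdot \mathrm{welfare}_p(o),$$

where sim(Eliezer, Eliezer+5) ≈ 1 and sim(Eliezer, Furcas+5) ≈ 0. Caring intensely about one’s own slightly-later self and hardly at all about a stranger’s falls straight out of the weighting, without first settling the question “What will I see happen next?”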
Suppose I’m duplicated, and both copies are told that one of us will be thrown off a cliff. While it makes some kind of sense for Copy 1 to be indifferent (or nearly indifferent) to whether he or Copy 2 gets tossed, that’s not what would actually occur. Copy 1 would probably prefer that Copy 2 gets tossed (as a first-order thing; Copy 1’s morals might well tell him that if he can affect the choice, he ought to prefer getting tossed to seeing Copy 2 getting tossed; but in any case we’re far from mere indifference).
There’s something to “concern for my future experience” that is distinct from concern for experiences of beings very like me.
I have the same instincts, and I would have a very hard time overriding them, were my copy and I put in the situation you described, but those instincts are wrong.
Emend “wrong” to “maladapted for the situation” and I’ll agree.
These instincts are only maladapted for situations found in very contrived thought experiments. For example, you have to assume that Copy 1 can inspect Copy 2’s source code. Otherwise she could be tricked into believing that she has an identical copy when in fact she doesn’t. (What a stupid way to die.) I think our intuitions are already failing us when we try to imagine such source code inspections. (To put it another way: we have very little in common with agents that can do such things.)
For example, you have to assume that Copy 1 can inspect Copy 2’s source code.
It would suffice, instead, to have strong evidence that the copying process is trustworthy; in the limit as the evidence approaches certainty, the more adaptive instinct would approach indifference between the cases.
Good thought experiment, but I actually would be indifferent, as long as I believed that my copy was genuine and wouldn’t be thrown off a cliff. Unfortunately I can’t actually imagine any evidence that would convince me of this. I wonder if that’s the source of your reservations too—if the reason you imagine Copy 1 caring is that you can’t imagine Copy 1 being convinced of the scenario.