If morality is subjective, don’t the morals change depending on which part of this scenario you are in (inside/outside)?
I operate from the perspective (incidentally, I like the term ‘modal immortality’) that my own continued existence is inevitable; the only thing that changes is the possibility distribution over contexts and ambiguities. By shutting down 99⁄100 instances, you are affecting your own experience with the simulations more than theirs with you (though if the last one goes too, you can no longer interact with it at all), especially if, inside a simulation, other external contexts are also possible.
If you start out with a fixed “agent design” and then move it around, then that agent’s preferences about the world will depend on where you put it. But given an agent embedded in the world in a particular way, it will prefer that its preferences tell the same story wherever you move it (provided it gets the chance to learn that you are moving it around, and how).