Consider a fully deterministic conscious simulation of a person. There are two possible futures: one where that simulation is run once, and another where it is run twice simultaneously in lockstep, with the exact same parameterization and environment. Do these worlds have different moral values?
I ask because...
initially I would have said no, probably not: both runs are identically the same person, so there is really only one instance there, but...
Consider a fully deterministic conscious simulation of a person. There are two possible futures: one where that simulation is run once, and another where the simulation is also run once, but where that future carries twice the probability mass. Do these worlds have different moral values?
to which the answer must surely be yes; otherwise it's really hard to have coherent moral values under quantum mechanics, where possible futures just are branches carrying different amounts of probability mass. Hence the contradiction: intuitively the lockstep duplicate adds nothing, yet doubling the probability mass of the very same run must add something.
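To make the tension concrete, here's a toy calculation (the value v and the branch masses are illustrative assumptions, nothing more): if moral value is a probability-mass-weighted sum over futures, doubling a branch's mass clearly doubles its contribution, and the whole question is whether lockstep duplication doubles the branch's own value or not.

```python
# Toy model: moral value as a probability-mass-weighted sum over futures.
# The value v and the masses below are illustrative assumptions.

def world_value(branches):
    """Sum of (probability mass) * (moral value) over a world's branches."""
    return sum(mass * value for mass, value in branches)

v = 1.0  # stipulated moral value of one run of the simulation

future_once = [(0.5, v)]         # the simulation runs once on a mass-0.5 branch
future_double_mass = [(1.0, v)]  # same single run, but twice the probability mass

# The open question: two lockstep copies on the mass-0.5 branch.
# Do the copies add up (2 * v), or deduplicate to a single instance (v)?
future_copies_add = [(0.5, 2 * v)]
future_copies_dedup = [(0.5, v)]

print(world_value(future_once))          # 0.5
print(world_value(future_double_mass))   # 1.0 -- doubling mass doubles value
print(world_value(future_copies_add))    # 1.0 -- matches doubling the mass
print(world_value(future_copies_dedup))  # 0.5 -- matches running once
```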
First I want to make sure we’re splitting off the personal from the aesthetic here. By “the aesthetic,” I mean the moral value from a truly outside perspective—like asking the question “if I got to design the universe, which way would I rather it be?” You don’t anticipate being this person, you just like people from an aesthetic standpoint and want your universe to have some. For this type of preference, you can prefer the universe to be however you’d like (:P) including larger vs. smaller computers.

Second is the personal question. If the person being simulated is me, what would I prefer? I resolved these questions to my own satisfaction in Treating Anthropic Selfish Preferences as an Extension of TDT ( https://www.lesswrong.com/posts/gTmWZEu3CcEQ6fLLM/treating-anthropic-selfish-preferences-as-an-extension-of ), but I’m not sure how helpful that post actually is for sharing insight.
The only model I’ve come across that seems to handle this type of thought experiment without breaking is UDASSA.
Consider a computer, 2 atoms thick, running a simulation of you. Suppose this computer can be divided down the middle into two 1-atom-thick computers, each of which would run the same simulation independently. We are faced with an unfortunate dichotomy: either the 2-atom-thick simulation has the same weight as the two 1-atom-thick simulations put together, or it doesn’t.
In the first case, we have to accept that some computer simulations count for more than others, even when they are running the same simulation (or we have to de-duplicate the set of all experiences, which leads to serious problems with Boltzmann brains). In this case, we are faced with the problem of comparing different substrates, and it seems impossible to avoid making arbitrary choices.
In the second case, we have to accept that the physical operation of dividing the 2-atom-thick computer has moral significance, which is even worse. Where exactly does the transition occur? What if each layer of the 2-atom-thick computer can run independently before the split? Is physical contact really significant? What about computers that aren’t physically coherent? What if two 1-atom-thick computers periodically synchronize themselves and self-destruct if they aren’t synchronized: does this synchronization effectively destroy one of the copies? I know of no way to accept this possibility without extremely counter-intuitive consequences.
UDASSA implies that simulations on the 2-atom-thick computer count for twice as much as simulations on a 1-atom-thick computer, because they are easier to specify: more descriptions point to them. Given a description of one of the 1-atom-thick computers, there are two descriptions of equal complexity that point to the simulation running on the 2-atom-thick computer, one pointing to each layer. When a 2-atom-thick computer splits, the total number of descriptions pointing to the experience it is simulating doesn’t change, so the split itself carries no moral significance.
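As a rough sketch of that counting argument (the description length L is a made-up number, and real UDASSA sums over all programs rather than just these few descriptions):

```python
# Toy UDASSA-style weighting: an experience's weight is the sum of
# 2 ** -len(d) over every description d that locates it. The description
# length L below is an assumed value for illustration only.

def weight(description_lengths_in_bits):
    return sum(2.0 ** -n for n in description_lengths_in_bits)

L = 100  # assumed bits needed to pick out one 1-atom-thick computer

one_thin_computer = weight([L])  # a single description
thick_computer = weight([L, L])  # two equal-length descriptions, one per layer

print(thick_computer / one_thin_computer)  # 2.0 -- counts twice as much

# Splitting the thick computer just swaps its two descriptions for one
# description per resulting thin computer: total weight is unchanged.
print(weight([L]) + weight([L]) == thick_computer)  # True
```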
Thanks, I figured this wouldn’t be a new question. UDASSA seems quite unsatisfying (I have no formal argument for that claim) but the perspective is nice. I appreciate the pointer :).