I will try a completely different explanation. For example, suppose I die, but in the future I am resurrected by a strong AI as an exact copy of me. If I think that personal identity is information, I should be happy about it.
Now assume that 10 copies of me exist on ten planets and all of them die in the same way. The same future AI may think that it is enough to create only one copy of me to resurrect all the dead copies. Now it is more similar to QI.
If we have many copies of a compact disk with Windows 95 and many of them are destroyed, it doesn't matter, as long as one disk still exists.
So, first of all, if only one copy exists then any given misfortune is more likely to wipe out every last one of me than if ten copies exist. Aside from that, I think it’s correct that I shouldn’t much care now how many of me there are—i.e., what measure worlds like the one I’m in have relative to some predecessor.
But there’s a time-asymmetry here: I can still care (and do) about the measure of future worlds with me in them, relative to the one I’m in now. (Because I can influence “successors” of where-I-am-now but not “predecessors”. The point of caring about things is to help you influence them.)
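To make the first point concrete, here is a minimal back-of-the-envelope calculation, under the assumption (not stated in the original comments) that each copy is destroyed by a given misfortune independently with the same probability $q$:

$$P(\text{all } n \text{ copies destroyed}) = q^{n}$$

For example, with $q = 0.5$, a single copy is lost half the time, while ten independent copies are all lost only about once in a thousand cases ($0.5^{10} \approx 0.001$). The caveat is the scenario above where all copies die "the same way": if the misfortunes are fully correlated, this advantage disappears.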
It looks like we are close to the conclusion that QI mainly draws a distinction between “egocentric” and “altruistic” goal systems.
The most interesting question is: where is the border between them? If I like my hand, is it part of me or part of the external world?
There is also an interesting analogy with virus behavior. A virus seems to be interested in the existence of its remote copies, with which it may have no causal connection at all, because they will continue to replicate. (Altruistic genes do the same, if they exist at all.) So egoistic behaviour here is altruistic toward other copies of the virus.