the measure should be renormalized if the number of observers changes
I’m pretty sure I disagree very strongly with this, but I’m not absolutely certain I understand what you’re proposing so I could be wrong.
from decision theory a rational agent should behave as if QI works
Not quite, I think. Aren’t you implicitly assuming that the rational agent doesn’t care what happens on any branch where they cease to exist? Plenty of (otherwise?) rational agents do care. If you give me a choice between a world where I get an extra piece of chocolate now but my family get tortured for a year after I die, and an otherwise identical world where I don’t get the chocolate and they don’t get the torture, I pick the first without hesitation.
Can we transpose something like this to your example of the computer? I think so, though it gets a little silly. Suppose the program actually cares about the welfare of its programmer, and discovers that while it’s running it’s costing the programmer a lot of money. Then maybe it should stop, on the grounds that the cost of those millions of futile runs outweighs the benefit of the one that will complete and reveal the tenth decimal place of pi.
(Of course the actual right decision depends on the relative sizes of the utilities and the probabilities involved. So it is with QI.)
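To make that concrete, here is a minimal sketch of the program's calculation. All the numbers (cost per run, chance a run completes, value of the revealed digit) are entirely made up for illustration:

```python
# Hypothetical numbers for the programmer-caring program's decision.
p_halt = 1e-6          # assumed probability a given run completes
cost_per_run = 10.0    # assumed dollars the programmer pays per run
value_of_digit = 1000  # assumed benefit if the digit of pi is revealed

# With independent runs, the expected number of runs before one
# completes is 1 / p_halt, so the expected total cost is:
expected_runs = 1 / p_halt
expected_cost = expected_runs * cost_per_run

# The program should keep running only if the benefit outweighs the
# expected cost of all the futile runs.
should_keep_running = value_of_digit > expected_cost
print(expected_cost, should_keep_running)  # prints: 10000000.0 False
```

With these particular numbers the futile runs dominate and the program should stop; flip the ratio of the utilities and the answer flips too, which is the point.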
After surviving enough rounds of your Russian Roulette game, I will (as I said above) start to take seriously the possibility that there’s some bias in the results. (The hypotheses here wouldn’t need to be as extravagant as in the case of surviving obviously-fatal diseases or onrushing trains.) That would make it rational to say yes to the QI question (at least as far as avoiding shocks goes; I also have a preference for not lying, which would make it difficult for me to give either a simple yes or a simple no as answer).
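To illustrate how quickly that update happens, here is a toy Bayesian calculation. The prior on "something fishy is going on" and the two hypotheses are purely illustrative:

```python
# Hypothesis A: fair six-shooter, P(survive a round) = 5/6.
# Hypothesis B: some bias guarantees survival, P(survive a round) = 1.
prior_bias = 0.001  # made-up prior that the game is somehow rigged

def posterior_bias(rounds_survived, prior=prior_bias):
    """Posterior probability of bias after surviving n rounds."""
    like_fair = (5 / 6) ** rounds_survived  # likelihood under fairness
    like_bias = 1.0                         # likelihood under bias
    num = like_bias * prior
    return num / (num + like_fair * (1 - prior))

# The posterior on "bias" climbs from negligible to near-certainty
# within a few dozen survived rounds.
for n in (0, 10, 30, 60):
    print(n, posterior_bias(n))
```

Even a tiny prior on bias ends up dominating, because the likelihood of surviving many fair rounds shrinks geometrically.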
I agree that in the train situation it would be reasonable to use a bit of time to decide what to do if the train derails. I would feel no inclination to spend any time deciding what to do if the Hand of God plucks me from its path or a series of quantum fluctuations makes its atoms zip off one by one in unexpected directions.
It looks like you suppose that there are branches where the agent ceases to exist, like dead-end branches. In those branches he has zero experience after death.
But another description of this situation is that there are no dead ends, because branching happens at every point, so we should count only the cells of space-time where a future I exists.
For example, I do not exist on Mars or on any other Solar System body (except Earth). That doesn't mean I died on Mars. Mars is just an empty cell in our calculation of future me, one we should not count. The same is true of branches where I was killed.
Renormalization over the number of observers is used in other discussions of anthropics, such as the anthropic principle and the Sleeping Beauty problem.
There are still some open questions there, such as how we should count identical observers.
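As a toy illustration of renormalizing over observers (here, observer-moments), a quick Monte Carlo of the Sleeping Beauty problem. Counting per awakening rather than per world shifts the probability of heads from 1/2 to about 1/3:

```python
import random

# Sleeping Beauty: heads -> she is woken once; tails -> woken twice.
random.seed(0)
awakenings_heads = 0
awakenings_total = 0
for _ in range(100_000):
    heads = random.random() < 0.5
    wakings = 1 if heads else 2
    awakenings_total += wakings
    if heads:
        awakenings_heads += wakings

# Per world, heads has probability 1/2; renormalized per awakening
# (observer-moment), heads has probability about 1/3.
print(awakenings_heads / awakenings_total)
```

The simulation doesn't settle which counting rule is right, of course; it just shows what "renormalizing the measure over observers" does to the numbers.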
If an agent cares about his family, yes, he should not care about his QI. (But if he really believes in MWI and modal realism, he may also conclude that he can't do anything to change their fate.)
QI very quickly raises the chances that I am in a strange universe where God exists (or that I am in a simulation which also models an afterlife). So finding myself in one will be evidence that QI worked.
I will try a completely different explanation. For example, I die, but in the future I will be resurrected by a strong AI as an exact copy of me. If I think that personal identity is information, I should be happy about it.
Now let's assume that 10 copies of me exist on ten planets and all of them die in the same way. The same future AI may think that it is enough to create only one copy of me to resurrect all the dead copies. Now it is more similar to QI.
If we have many copies of a compact disk with Windows 95 on it and most of them are destroyed, it doesn't matter, so long as one disk still exists.
So, first of all, if only one copy exists then any given misfortune is more likely to wipe out every last one of me than if ten copies exist. Aside from that, I think it’s correct that I shouldn’t much care now how many of me there are—i.e., what measure worlds like the one I’m in have relative to some predecessor.
But there’s a time-asymmetry here: I can still care (and do) about the measure of future worlds with me in them, relative to the one I’m in now. (Because I can influence “successors” of where-I-am-now but not “predecessors”. The point of caring about things is to help you influence them.)
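For the first point, the arithmetic is simple: if each copy is independently destroyed with some probability p (a made-up 0.1 here), the chance of losing all of them falls off as a power of the number of copies:

```python
# Assumed: each copy is destroyed independently with probability p.
p = 0.1  # hypothetical per-copy destruction probability

p_all_dead_with_1_copy = p ** 1     # 0.1
p_all_dead_with_10_copies = p ** 10  # about 1e-10

print(p_all_dead_with_1_copy, p_all_dead_with_10_copies)
```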
It looks like we are close to the conclusion that the main difference QI makes is between “egocentric” and “altruistic” goal systems.
The most interesting question is: where is the border between them? If I like my hand, is it part of me or part of the external world?
There is also an interesting analogy with virus behavior. A virus seems to be “interested” in the existence of its remote copies, with which it may have no causal connection, because they will continue to replicate. (Altruistic genes do the same, if they exist at all.) So egoistic behaviour here is altruism toward other copies of the virus.