In some sense this is a core idea of UDT: when coordinating with forks of yourself, you defer to your unique last common ancestor. When it's not literally a fork of yourself, there's more arbitrariness, but you can still often use history to narrow down coordination Schelling points (e.g. "what would Jesus do?").
I think this is a wholly incorrect line of thinking. UDT operates on your logical ancestor, not your literal one.
Say, if you know enough science, you know that the normal distribution is the maximum-entropy distribution for a fixed mean and variance, and is therefore the optimal prior distribution under a certain set of assumptions. You can ask yourself the question "suppose I hadn't seen this evidence; what would my prior probability be?", get an answer, and cooperate with counterfactual versions of yourself that have seen other versions of the evidence. But you can't cooperate with a hypothetical version of yourself that doesn't know what the normal distribution is, because if it doesn't know about the normal distribution, it can't predict how you would behave and account for that in cooperation.
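The max-entropy claim is easy to check numerically. A minimal sketch: compare the differential entropies of three zero-mean, unit-variance distributions using their standard closed-form entropy formulas (the specific comparison distributions, uniform and Laplace, are my choice of illustration, not from the original).

```python
import math

# Differential entropies (in nats) of three zero-mean, unit-variance distributions,
# using their standard closed-form entropy expressions.
h_normal = 0.5 * math.log(2 * math.pi * math.e)  # Gaussian N(0, 1)
h_uniform = math.log(2 * math.sqrt(3))           # uniform on [-sqrt(3), sqrt(3)] (variance 1)
h_laplace = 1 + math.log(math.sqrt(2))           # Laplace with scale b = 1/sqrt(2) (variance 1)

# The Gaussian comes out on top, as the max-entropy theorem predicts.
print(h_normal, h_laplace, h_uniform)  # ~1.4189, ~1.3466, ~1.2425
```

Any other distribution with the same mean and variance would likewise come out below the Gaussian.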
Sufficiently different versions of yourself are just logically uncorrelated with you and there is no game-theoretic reason to account for them.
It seems odd to make an absolute statement here. The more different a version of yourself is, the less correlated it is with you, but there's still some correlation. And UDT should also be applicable to interactions with other people, who are typically different from you in a whole bunch of ways.
The absoluteness comes from the absolute nature of taking actions, not from any absoluteness of logical correlation. E.g., in a Prisoner's Dilemma with payoffs (5,5), (10,1), (2,2), you should defect if your counterparty acts conditional on your action in less than 75% of cases, which is quite a high logical correlation, but expected value is higher if you defect.
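One way to recover that 75% figure, under an assumed correlation model not spelled out in the original: the counterparty mirrors your action with probability p and plays the opposite move with probability 1 − p. A minimal sketch:

```python
# Payoffs (you, them): (C, C) = (5, 5), (D, C) = (10, 1), (C, D) = (1, 10), (D, D) = (2, 2).
# Assumed model: counterparty mirrors your action with probability p,
# plays the opposite move with probability 1 - p.

def ev_cooperate(p):
    # mirrored -> mutual cooperation (5); opposed -> sucker payoff (1)
    return 5 * p + 1 * (1 - p)

def ev_defect(p):
    # mirrored -> mutual defection (2); opposed -> temptation payoff (10)
    return 2 * p + 10 * (1 - p)

# The two are equal at p = 0.75; below that, defection has higher expected value.
print(ev_cooperate(0.75), ev_defect(0.75))  # 4.0 4.0
```

Solving 5p + (1 − p) = 2p + 10(1 − p) gives p = 3/4, so even a counterparty that matches your action 70% of the time is still, in expectation, worth defecting against.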