Evidence to support your idea: whenever I make a choice, in another branch, ‘I’ made the other decision, so if I cared equally about all future versions of myself, then I’d have no reason to choose one option over another.
If correct, this shows I don’t care equally about currently parallel worlds, but not that I don’t care equally about future sub-branches from this one.
Whenever I make a choice, there are branches that made another choice. But not all branches are equal. The closer my decision algorithm is to deterministic (on a macroscopic scale), the more asymmetric the distribution of measure among decision outcomes. (And the cases where my decision isn’t close to deterministic are precisely the ones where I could just as easily have chosen the other way—where I don’t have any reason to pick one choice.)
Thus the thought experiment doesn’t show that I don’t care about all my branches, current and future, simply proportional to their measure.
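The measure-proportional caring described above reduces to ordinary expected utility with branch measures in place of probabilities. A minimal sketch, with made-up weights and utilities purely for illustration:

```python
# Hypothetical illustration: caring about branches in proportion to their
# measure is formally like maximizing expected utility.
def measure_weighted_value(branches):
    """branches: list of (measure, utility) pairs; measures sum to 1."""
    return sum(m * u for m, u in branches)

# Near-deterministic decision: almost all measure follows the chosen option,
# so the options differ sharply in value.
deterministic = [(0.999, 10), (0.001, 0)]

# Near-coin-flip decision: measure is split almost evenly, so the options
# are close in value -- exactly the case where the choice barely matters.
coin_flip = [(0.51, 10), (0.49, 0)]

print(measure_weighted_value(deterministic))
print(measure_weighted_value(coin_flip))
```

The asymmetric case still gives a clear reason to choose, while the symmetric case is precisely the one where no strong reason exists, matching the argument in the text.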