Should it? It appears to me that efforts toward saving the world, if successful, only raise the odds that the branch you personally experience will include a saved world.
Or, from a different perspective, your decision algorithm partially determines the optimization target for the updateless game-theoretic compromise that emerges around that algorithm.
That’s certainly a useful view of the ambiguity inherent in decision theory in MWI. Or it would be, if I had a local group to help me get a deep understanding of UDT—the Tampa chapter of the Bayesian Conspiracy has lain in abeyance since your visit.