Thanks. That makes it a lot clearer.
It seems like this “caring” could be analyzed a lot more, though. For example, suppose I were an altruist who continued to care about the “heads” worlds even after I learned that I’m not in them. Wouldn’t I still assign probability ~1 to the proposition that the coin came up tails in my own world? What does that probability assignment of ~1 mean in that case?
I suppose the idea is that a probability captures not only how much I care about a world, but also how much I think that I can influence that world by acting on my values.
See http://lesswrong.com/lw/15m/towards_a_new_decision_theory/ for more details. Many of my later posts can be considered explanations/justifications for the “design choices” I made in that post.