Yes, these complaints about option 1 are very real. They bother me, they make me unsure about my answer, and they are a big part of why I created this post.
However, the fact that factoring Bob’s U may not be easy or even possible for Alice is not a good reason to say that Alice shouldn’t try to take the action that maximizes her expectation of Bob’s utility. It makes her job harder, but that doesn’t mean she should optimize something else just because it is simpler.
I prefer 1 to 3, even though I think 3 is actually the more aesthetically pleasing answer.
If probability is caring, what does it mean for Alice to say that Bob’s caring is wrong? It seems to me that the intuitions in favor of option 1 are strongest in the case where some sort of “objective probability” exists and Alice has more information than Bob, not different priors. But in that case, options 1 and 3 are equivalent.
If you want to build a toy example where two agents have different but reasonable priors, maybe Robin Hanson’s pre-rationality is relevant? I’m not sure.
Note that your interpretation of altruism might make Alice go to war against Bob, even if she has no wishes of her own and cares only about being altruistic toward Bob. I guess the question is what your desiderata for altruism are.
If probability is caring, what does it mean for Alice to say that Bob’s caring is wrong?
In exactly the same way that, with subjective morality, other people’s claims about morality are wrong relative to me. All I meant by that is that Alice doesn’t care (in the probability sense) more about the world just because Bob does, because relative to Alice, Bob is simply caring about things that are not very important.