This seems like a lot of setup for a trivial answer. Or, more likely, the complexity isn’t where you think it is, so you’ve given the wrong details in the setup. (Or I’ve missed the point completely, which is looking likely based on other comments from people much smarter than I am.)
There’s no part of the scenario that allows an update, so classical decision theory is sufficient to analyze this. Alice believes that there is a 2⁄3 chance of heads, and she prefers a world where her guess is correct. She guesses heads. Done.
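A quick sanity check in Python (the $1 payoff for a correct guess is my assumption for illustration; the post doesn’t pin down the stakes):

```python
# Expected value of each guess under Alice's credence of 2/3 heads.
# The $1 payoff for a correct guess is an assumption, not from the post.
p_heads = 2 / 3
payoff = 1.0

ev_guess_heads = p_heads * payoff        # 2/3 * $1
ev_guess_tails = (1 - p_heads) * payoff  # 1/3 * $1

print(f"EV(guess heads) = ${ev_guess_heads:.2f}")  # $0.67
print(f"EV(guess tails) = ${ev_guess_tails:.2f}")  # $0.33
```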
This is actually option 0: maximize your own utility, recognizing that you get utility from your beliefs about others’ happiness (*).
You can add complexity that leads toward different choices, but unless there are iterated choices and memory loss, or reverse causality, or other decision-topology elements, it’s unlikely to be anything but option 0. Bob’s probability assessment is completely irrelevant if you can’t update on it (which you ruled out) and if Bob can never learn that you ignored his beliefs (so there’s no utility or happiness in giving up expected money to show him loyalty).
(*) Note: I say “your utility” and “others’ happiness” on purpose: the terms for them in your utility function actually refer to your model of their utility’s effect on you, rather than to their utility itself, which you cannot detect.
So you’re saying that you don’t care at all about whether or not other people’s preferences are fulfilled, just about their happiness? Is this just because you cannot observe others’ preferences?
I think that if you believe that, then I agree with you: this thought experiment has no value to you. This is one of the things I tried to say at the beginning about preferences over things you cannot observe. I probably should have said specifically how this relates to altruism.
I think you’re confusing preferences about the world with preferences about an unobservable cause. As an altruist, Alice cares about Bob’s preference over whether or not he gets the dollar. Bob has no way of knowing about (or having a preference over) Alice’s prediction, and she knows it, so she’d be an idiot to project that onto her choice. If she thinks Bob may be right, then she has updated her probability estimate, in contradiction of the story.
Options 2 and 3 are twice as likely to lose as option 1. This is what it means for Alice to believe that there is a 2⁄3 chance of heads.
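Spelling that out (treating options 2 and 3 as amounting to guessing tails is my reading of the post; the numbers follow directly from the 2⁄3 credence):

```python
# Probability of guessing wrong, under Alice's credence of 2/3 heads.
# Treating options 2 and 3 as guessing tails is my reading, not the post's wording.
p_lose_guess_heads = 1 / 3  # option 1: wrong only if the coin is tails
p_lose_guess_tails = 2 / 3  # options 2 and 3: wrong if the coin is heads

print(p_lose_guess_tails / p_lose_guess_heads)  # 2.0 -- twice as likely to lose
```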