If the goal is a simple analysis, why not this:
Let average_one_box_value = the average value received by people who chose one box.
Let average_two_box_value = the average value received by people who chose two boxes.
If average_one_box_value > average_two_box_value, then pick one box, else pick two.
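The rule above can be sketched as a simple empirical comparison (a minimal illustration; the payoff lists and the 99% accuracy figure are hypothetical, not from the thread):

```python
def choose_strategy(one_box_payoffs, two_box_payoffs):
    """Pick whichever strategy earned more on average for past players."""
    avg_one = sum(one_box_payoffs) / len(one_box_payoffs)
    avg_two = sum(two_box_payoffs) / len(two_box_payoffs)
    return "one box" if avg_one > avg_two else "two boxes"

# Hypothetical data: Omega predicts correctly 99% of the time.
# A correctly predicted one-boxer finds the big box full ($1,000,000);
# a correctly predicted two-boxer finds it empty (only $1,000).
one_boxers = [1_000_000] * 99 + [0] * 1
two_boxers = [1_000] * 99 + [1_001_000] * 1

print(choose_strategy(one_boxers, two_boxers))  # prints "one box"
```

Note that the rule never needs Omega's accuracy as an explicit number, and it works even if that accuracy differs between the two groups, since each group's average is measured separately.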
As a bonus, this eliminates the need to assume that Omega's accuracy is the same for one-boxers as for two-boxers.
[Edit—just plain wrong, see Misha's comment below] Minor quibble: it's also not necessary to assume linear utility for dollars, just continuous (that is, more money is always better). However, I'm pretty sure that's true in your example as well.
It is definitely necessary to assume linear utility for dollars. For example: suppose your (marginal) utility function for money is U($0) = 0, U($1000) = 1, U($1000000) = 2 (where $1000 and $1000000 are the amounts of money that could be in the two boxes, respectively). Furthermore, suppose Omega always correctly predicts two-boxers, so they always get $1000. However, Omega is very pessimistic about one-boxers, so only 0.2% of them get $1000000, and the average one-box value ends up being $2000.
It is then not correct to say that you should one-box. For you, the expected utility of two-boxing is exactly 1, but the expected utility of one-boxing is 0.2% × 2 = 0.004, and so one-boxing is a really stupid strategy even though the expected monetary gain is twice as high.
Edit: of course, there’s an obvious fix: compute the average utility received by people, according to your utility function, and optimize over that.
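The fix in the edit above can be sketched by rerunning the same comparison on utilities instead of dollars, using the utility function and the 0.2% prediction rate from the example (all numbers come from the comment above; the variable names are mine):

```python
# Utility function from the example: concave in money.
U = {0: 0, 1_000: 1, 1_000_000: 2}

# Omega always predicts two-boxers correctly; only 0.2% of
# one-boxers are predicted correctly and find the big box full.
p_correct_one_box = 0.002

# Expected dollars: one-boxing looks twice as good...
ev_one_dollars = p_correct_one_box * 1_000_000  # $2000
ev_two_dollars = 1_000                          # $1000

# ...but expected utility says the opposite.
eu_one = p_correct_one_box * U[1_000_000]  # 0.002 * 2 = 0.004
eu_two = 1.0 * U[1_000]                    # 1.0

print(eu_one < eu_two)  # prints True: two-boxing wins in utility
```

Optimizing the empirical average of `U(payoff)` rather than of the raw payoff keeps the original "just compare the two groups" rule intact while handling any monotone utility function.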
It's been a while, but isn't that essentially what I did?
That was my goal: the same, but less verbose, and without needing to factor out probabilities that are later factored back in.
My question was unclear; let me try again: (why) is it necessary to go through all the work of arriving at a probability that Omega will predict you correctly?
[edit question: is there any way to do strikethrough text in markdown? Or embed HTML tags?]