Suppose Omega plays the following game (the “Probability Game”) with me: “You will tell me a number X representing the probability of A. If A turns out to be true, I will increase your utility by ln(X); otherwise, I will increase your utility by ln(1-X).” It’s well known that the way to maximize one’s expected utility in this game is to report one’s actual probability of A.
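This is just the properness of the logarithmic scoring rule. Here is a minimal numerical sketch: if my true probability for A is p, the expected payoff p·ln(X) + (1−p)·ln(1−X) is largest exactly when I report X = p (the value p = 0.7 below is an arbitrary example).

```python
import numpy as np

# Expected payoff of reporting X when the true probability of A is p,
# under the log scoring rule: p*ln(X) + (1-p)*ln(1-X).
def expected_payoff(p, x):
    return p * np.log(x) + (1 - p) * np.log(1 - x)

p = 0.7                              # an arbitrary true probability
xs = np.linspace(0.001, 0.999, 999)  # candidate reports
best_x = xs[np.argmax(expected_payoff(p, xs))]
print(best_x)  # ~0.7: the report that maximizes expected payoff equals p
```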
Presumably, decision mechanisms should be consistent under reflection. Even if not, if I somehow know that Omega is going to split me into 1,000,000,001 copies, send 1,000,000,000 of them into rooms of one color and the remaining one into a room of the other color, and play the Probability Game with each copy on the statement “I am in one of the majority-color rooms,” I want to modify my decision mechanism now to do whatever I think is best.
Suppose I care about the entire group of 1,000,000,000 me’s who go into one color of room precisely as much as I care about the single me who goes into the other color. (Perhaps I’m extending the idea that two copies of one person should not be more deserving than a single copy of that person.) The average utility I want to maximize therefore weighs the group’s average utility and the lone copy’s utility equally. To maximize it, the best answer is to have everyone declare a 50% probability, resulting in an average utility of about −0.69. If I instead had everyone declare a 1,000,000,000-in-1,000,000,001 probability, the average utility would be about −10.
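A quick check of these figures, as a minimal sketch: it assumes every copy reports the same number, and that the “average” weighs the whole group of 1,000,000,000 equally with the lone copy.

```python
import math

N = 10**9                 # copies in the majority-color rooms
p_majority = N / (N + 1)  # the "1,000,000,000-in-1,000,000,001" report

def group_weighted_avg(x):
    # Each majority copy gets ln(x); the lone copy gets ln(1 - x).
    # Weigh the whole majority group equally with the lone copy.
    return (math.log(x) + math.log(1 - x)) / 2

print(group_weighted_avg(0.5))         # ~ -0.69
print(group_weighted_avg(p_majority))  # ~ -10.4
```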
Suppose, on the other hand, that I care about each individual person equally. If I had everyone declare a 50% probability, the average utility would still be −0.69, but if I had everyone declare a 1,000,000,000-in-1,000,000,001 probability, the average utility would go all the way up to −0.000000022.
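The same check for the per-person average, again a sketch under the assumption that every copy reports the same number:

```python
import math

N = 10**9
p_majority = N / (N + 1)

def per_copy_avg(x):
    # Each of the N majority copies gets ln(x); the lone copy gets ln(1 - x).
    # Average over all N + 1 copies individually.
    return (N * math.log(x) + math.log(1 - x)) / (N + 1)

print(per_copy_avg(0.5))         # ~ -0.69
print(per_copy_avg(p_majority))  # ~ -0.000000022
```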
One’s answer to the Probability Game is one’s probability estimate. The consistent-under-reflection answer to the Probability Game depends on one’s values. Therefore, one’s probability estimate depends on one’s values. It’s counterintuitive, but I don’t think I can argue against it.
Now here is a possible refutation. Suppose I know that some time in the future I’m going to be turned into my evil twin, Dr. Dingo, and that Omega is going to play the Probability Game with me (that is, with him) on the statement “The sky is blue.” I hate my evil twin so much that I count his utility as subtracted from mine. Therefore, I modify myself to say that the probability that the sky is blue is 0, thereby resulting in a utility of negative infinity for him and a utility of positive infinity for me. Through the same mechanism, using an interpretation function to determine my utility from the utilities of future copies of me, I apparently make the probability that the sky is blue be 0. This doesn’t seem right.
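To make the mechanics concrete, here is a small sketch. It assumes the sky is in fact blue (so the reporter’s payoff is ln(X)) and that my interpretation function is simply the negative of Dr. Dingo’s utility.

```python
import math

# Dr. Dingo's payoff for reporting X on a true statement is ln(X);
# under the "subtract his utility" interpretation, mine is -ln(X).
for x in [0.5, 1e-3, 1e-9, 1e-30]:
    dingo = math.log(x)
    mine = -dingo
    print(f"X={x:g}  Dingo's utility={dingo:.2f}  my utility={mine:.2f}")

# As X -> 0, his utility -> -infinity and mine -> +infinity, which is why
# the self-modification pushes the declared probability all the way to 0.
```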
Perhaps we could require that interpretation functions be monotonically nondecreasing in the utilities they interpret, so that an increase in a future me’s utility can’t decrease my current utility. I don’t know whether that would work.