I’m pretty sure Eliezer would one-box against Omega any time box B contained more money than box A. Against you or me, I’m pretty sure he would one-box with the original 1,000,000:1,000 problem (that’s kind of the obvious answer), but I’m not sure he would if it were a 1,200:1,000 problem.
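To see why the 1,200:1,000 version is a genuinely harder call, here’s a minimal expected-value sketch (my toy model, assuming a risk-neutral agent and a predictor whose accuracy p is the same whichever way you choose; neither assumption is anything Eliezer has committed to):

```python
# Break-even predictor accuracy for one-boxing, under my assumptions above:
#   EV(one-box) = p * b
#   EV(two-box) = (1 - p) * b + a
#   One-box iff p * b > (1 - p) * b + a, i.e. p > (a + b) / (2 * b)
def break_even_accuracy(a, b):
    return (a + b) / (2 * b)

print(break_even_accuracy(a=1_000, b=1_000_000))  # 0.5005
print(break_even_accuracy(a=1_000, b=1_200))      # ~0.9167
```

At 1,000,000:1,000 a predictor barely better than a coin flip justifies one-boxing; at 1,200:1,000 the predictor needs better than about 92% accuracy, which is plausible for Omega but probably not for you or me.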
A further thing to note: if Eliezer models other people as either significantly overestimating or significantly underestimating the probability he’ll one-box against them, both possibilities increase the probability he’ll actually two-box against them: in either case their prediction no longer tracks what he actually does, so two-boxing dominates.
So it all depends on Eliezer’s model of other people’s model of Eliezer’s model of their model. Insert The Princess Bride reference. :-)
Or at least on your model of how Eliezer models other people modeling his model of them. He may go one level deeper and model other people’s model of his model of other people’s model of his model of them, or (more likely) not bother and just use general heuristics, because modeling breaks down after one or two layers of recursion most of the time.
Now we are getting somewhere good! Certainty rarely shows up in predictions, especially about the future. Your decision theory may be timeless, but don’t confuse the map with the territory: the universe may not be.
Unless you are assigning a numerical, non-zero, non-unity probability to Omega’s accuracy, you do not know when to one-box and when to two-box with arbitrary amounts of money in the boxes. And unless your FAI is a chump, it is considering LOTS of details in estimating Omega’s accuracy, no doubt including how little its own finite knowledge and computation can do to rule out the possibility that Omega is tricking it.
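As a toy version of that estimation problem (my sketch only, nothing like an actual FAI design): keep a posterior over Omega’s accuracy based on its observed track record rather than a point estimate, and one-box only when the posterior mean clears the break-even threshold from the sketch above.

```python
# Toy sketch under my assumptions: Omega's accuracy p is unknown, so hold
# a Beta posterior over it from observed past predictions. Expected value
# is linear in p, so the posterior mean is all this decision rule needs.
def decide(correct, incorrect, a, b):
    p_mean = (correct + 1) / (correct + incorrect + 2)  # Laplace-smoothed mean
    threshold = (a + b) / (2 * b)                       # break-even accuracy
    return "one-box" if p_mean > threshold else "two-box"

print(decide(correct=99, incorrect=1, a=1_000, b=1_000_000))  # one-box
print(decide(correct=99, incorrect=1, a=1_000, b=1_200))      # one-box (mean ~0.980)
print(decide(correct=9,  incorrect=1, a=1_000, b=1_200))      # two-box (mean ~0.833)
```

A real estimate would also have to weigh the possibility that the track record itself is staged, which is where the con-game worry below comes in.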
A NASA engineer had been telling Feynman that the liquid rocket motor had a zero probability of exploding on takeoff. Feynman convinced him that this was not an engineering answer. The NASA engineer then smiled and told Feynman the probability of the liquid rocket motor exploding on takeoff was “epsilon.” Feynman replied (and I paraphrase from memory): “Good! Now we are getting somewhere! Now all you have to tell me is what your estimate for the value of epsilon is, and how you arrived at that number.”
Any calculation of your estimate of Omega’s reliability which does not include gigantic terms for the probability that Omega is tricking you in a way you haven’t figured out yet is likely to fail. I base that on the prevalence and importance of con games in the best natural experiment on intelligence we have: humans.
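To make those gigantic terms concrete (again a toy model of mine; q is a hypothetical parameter, not part of the original problem): suppose that with probability q Omega is running a con you haven’t figured out, in which case box B is empty no matter what you do and the only sure money is the visible box A.

```python
# Toy con-game term, under my assumptions: with probability (1 - q) Omega
# behaves as advertised with accuracy p; with probability q it's a con and
# box B is empty regardless, so only two-boxers walk away with anything.
def ev_with_con(p, q, a, b):
    ev_one = (1 - q) * p * b
    ev_two = (1 - q) * ((1 - p) * b + a) + q * a
    return ev_one, ev_two

print(ev_with_con(p=0.99, q=0.0, a=1_000, b=1_200))      # (1188.0, 1012.0): one-box
print(ev_with_con(p=0.99, q=0.2, a=1_000, b=1_200))      # (~950, ~1010): two-box
print(ev_with_con(p=0.99, q=0.2, a=1_000, b=1_000_000))  # (~792000, ~9000): still one-box
```

Even a modest con probability flips the 1,200:1,000 case while leaving the 1,000,000:1,000 case untouched, which is one way to cash out the intuition at the top of this thread.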