Ok, so as I understand timeless decision theory, one wants to honor the precommitments one would have made if the outcome actually depended on the answer, whether or not the outcome actually does depend on it. The reason for this seems to be that behaving as a timeless decision agent makes your behavior predictable to other timeless-decision-theoretic agents (including your future selves), and therefore big wins can be had all around, especially when trying to predict your own future behavior.
So, if you buy the idea that there are multiple universes, and multiple instantiations of this problem, and you somehow care about the results in those other universes, and your actions indicate probabilistically how other instantiations of your predicted self will act, then by all means, one-box on problem #1.
However, if you do NOT care about other universes, believe this is in fact a single instantiation, and are not totally freaked out by the idea of disobeying the desires of your just-revealed creator (or actually get some pleasure out of the idea), then please two-box. You as you are in this universe will NOT unexist if you do so, and you know that going in. So calculate the utility you gain from getting a million dollars this one time vs. the utility you lose from being an imperfect timeless-decision-theoretic agent. Sure, there's some loss, but at a high enough payout it becomes a worthy trade.
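As a minimal sketch of that comparison (the dollar value assigned to the precommitment loss is a purely hypothetical placeholder; nothing in the comment quantifies it):

```python
# Hypothetical one-shot tradeoff: pocket the extra money now, at some
# made-up utility cost for being an imperfect timeless decision agent.
gain_from_two_boxing = 1_000_000          # the one-time payout, treated directly as utils
loss_from_broken_precommitment = 250_000  # hypothetical placeholder, not from the problem

if gain_from_two_boxing > loss_from_broken_precommitment:
    print("Two-box: at this payout the trade is worth it.")
else:
    print("One-box: predictability is worth more than this payout.")
```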
I think Newcomb's problem would be more interesting if the 1st box always contained $500,000, the 2nd box contained $1 million only when Omega predicted one-boxing, and Omega was only right, say, 75% of the time. See how fast answers start changing. What if Omega thought you were a dirty two-boxer and left the 2nd box empty? Then you would be screwed if you one-boxed! Try telling your wife that you made the correct 'timeless decision theoretical' answer when you come home with nothing.
You can’t change the form of the problem like that and expect the same answer to apply! If, when you two-box, Omega has a 25% chance of misidentifying you as a one-boxer, and vice versa, then you can use that in a normal expected utility calculation.
If you one-box, you have a 75% chance of getting $1 million and a 25% chance of nothing; if you two-box, a 75% chance of $0.5 million and a 25% chance of $1.5 million. With linear utility over money, one-boxing and two-boxing are equivalent (expected value: $750,000), and given a risk-averse dollars-to-utils mapping that penalizes walking away with nothing, two-boxing is the better deal. (I don't think TDT disagrees with that reasoning...)
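A quick sanity check of those numbers; a minimal sketch, with the square-root curve standing in as just one arbitrary example of a risk-averse dollars-to-utils mapping (nothing in the problem specifies one):

```python
# Expected utility in the modified problem: box 1 always holds $500,000;
# box 2 holds $1,000,000 only if Omega predicts one-boxing; Omega is right 75% of the time.
from math import sqrt

P_RIGHT = 0.75

# (probability, payout) pairs for each strategy
one_box = [(P_RIGHT, 1_000_000), (1 - P_RIGHT, 0)]
two_box = [(P_RIGHT, 500_000), (1 - P_RIGHT, 1_500_000)]

def expected(lottery, utility=lambda x: x):
    """Expected utility of a lottery under a given dollars-to-utils mapping."""
    return sum(p * utility(x) for p, x in lottery)

print(expected(one_box), expected(two_box))              # 750000.0 750000.0  (linear: a tie)
print(expected(one_box, sqrt), expected(two_box, sqrt))  # 750.0 vs ~836.5    (concave: two-box wins)
```

Under the square-root example, the possible payoff of $0 drags the one-box average down, which is what gives two-boxing the edge here.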
That's kind of my point: it is a utility calculation, not some mystical ur-problem. TDT-type problems occur all the time in real life, but they tend to involve not 'perfect' predictors but other flawed agents. The decision to cooperate or not cooperate thus depends on the calculated utility of doing so.
Right, I was mainly responding to the implication that TDT would be to blame for that wrong answer.