I’ve been trying to work on this problem based on my admittedly poor understanding of Updateless Decision Theory, and I think I’ve come to the conclusion that, while you should one-box in Newcomb’s problem and in Transparent Newcomb’s problem, you should two-box when dealing with Prometheus, ignore Azathoth, and ignore the desires of evil parents.

Why? My reasoning is based on these lines from cousin_it’s explanation of UDT:
When you’re faced with a decision, you find all copies of you in the entire “multiverse” that are faced with the same decision (“information set”), and choose the decision that logically implies the maximum sum of resulting utilities weighted by universe-weight. […] For example, Counterfactual Mugging: by assumption, your decision logically affects both heads-universe and tails-universe, which (also by assumption) have equal weight, so by agreeing to pay you win more cookies overall.
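To check my understanding of that rule, here’s a toy version of the Counterfactual Mugging in Python (the payoff numbers and the `udt_value` framing are mine, not cousin_it’s; treat it as a sketch of the quoted idea, not as “the” UDT algorithm):

```python
# Toy rendering of the quoted rule: pick the policy that maximizes the
# universe-weighted sum of utilities over every copy of you that faces
# the same decision. Payoffs are made up for illustration.

# Counterfactual Mugging: heads-universe and tails-universe, equal weight.
WORLDS = [
    ("heads", 0.5),
    ("tails", 0.5),
]

def utility(world, policy):
    """Utility of the copy of you in `world`, given that every copy
    runs the same `policy` ('pay' or 'refuse')."""
    if world == "tails":
        return -100 if policy == "pay" else 0   # you're asked to pay
    # heads: Omega rewards you iff it predicts your policy would pay
    return 10000 if policy == "pay" else 0

def udt_value(policy):
    # Weighted sum over all copies in the information set.
    return sum(w * utility(world, policy) for world, w in WORLDS)

for policy in ("pay", "refuse"):
    print(policy, udt_value(policy))
# pay    -> 0.5*(-100) + 0.5*10000 = 4950.0
# refuse -> 0.0
```

Paying wins because your one decision is evaluated in both universes at once.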
I start by taking into account that there are some universes where I was created by Prometheus/Azathoth/evil parents and some where I was not. I then try to make the decision that will increase the utility of all my copies in all possible universes where a version of me is faced with this decision. If I two-box, then all the existing copies of me faced with the same decision will also two-box and get $200. The nonexistent mes in the universes where I was not created will keep right on not existing. If I one-box, all the copies of me will get $100. Again, the nonexistent mes in the universes where I was not created will keep right on not existing, their nonexistent utility unchanged. So all my copies will get more utility if I two-box, ignore Azathoth, and tell my evil parents to do something anatomically improbable. The nonexistent mes cannot be affected.
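Here’s that reasoning as a sketch in the same toy framework (the 50/50 universe weights are arbitrary assumptions of mine; the $200/$100 payoffs are from the problem as I’ve stated it):

```python
# My first reading: sum utility only over universes where a copy of me
# actually faces the decision; nonexistent copies are simply skipped.

WORLDS = [
    ("created",     0.5, True),   # Prometheus made me; I face the boxes
    ("not_created", 0.5, False),  # no copy of me exists here
]

def payoff(policy):
    return 200 if policy == "two-box" else 100

def first_reading_value(policy):
    # The `exists` flag filters the sum, and crucially it does NOT
    # depend on the policy on this reading.
    return sum(w * payoff(policy)
               for _, w, exists in WORLDS if exists)

for policy in ("one-box", "two-box"):
    print(policy, first_reading_value(policy))
# one-box -> 50.0, two-box -> 100.0: two-boxing wins on this reading.
```

Because the not-created universe contributes nothing either way, it can never penalize two-boxing.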
The key is that you’re supposed to consider the utility of “all copies of you in the entire “multiverse” that are faced with the same decision,” and copies that were never created are obviously not faced with the same decision. This differs from the Counterfactual Mugging because there you exist in both the heads-universe and the tails-universe, so you have to take the utility of both copies into account. I believe that it differs from Newcomb’s problem and Transparent Newcomb’s problem for the same reason.
So it looks like, if I understand UDT correctly, in Newcomblike problems where your never having existed is part of the problem, one-boxing is not necessarily rational. (An aside: I should also mention that I am assuming that whatever method of prediction Prometheus used did not result in the creation of a morally significant copy of you in his head. That would be a whole other ballgame, and I think the spirit of the original post was that Prometheus’ prediction method did not do this.)
Did I get this right, or is my understanding of UDT wrong? I’m not very certain of this at all, and would like it if someone with a stronger understanding of UDT could confirm or disconfirm it.
UPDATE: I think that a possible flaw in my reasoning is that I have misunderstood UDT to mean “make all your copies make whatever decision maximizes their utility in all possible situations,” when what it really means is more like “make your decision as if the Omega/Prometheus in those other universes is watching your universe and basing its decision on your behavior there, rather than on the behavior of the you in its native universe; and act to maximize utility across all possible universes.” I think my previous formulation implies two-boxing in Newcomb’s problem, which seems wrong. With my second formulation it might indeed be better to one-box in the Prometheus problem, because in some other universe Prometheus is “watching” (i.e. simulating) the you of your universe and is going to decide whether to create you based on what you do.
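Under the second formulation, whether the “created” universe contains a copy of me is itself a function of my policy. Here is the same sketch with that one change (I’m assuming, as I think the original post intends, that Prometheus creates me only if his simulation of me one-boxes):

```python
# Second formulation: Prometheus's creation decision depends on what my
# policy does, so `exists` is now a function of the policy.
# Assumption (mine): he creates one-boxers only.

def worlds(policy):
    created = (policy == "one-box")
    return [
        ("created",     0.5, created),
        ("not_created", 0.5, False),
    ]

def payoff(policy):
    return 200 if policy == "two-box" else 100

def second_reading_value(policy):
    return sum(w * payoff(policy)
               for _, w, exists in worlds(policy) if exists)

for policy in ("one-box", "two-box"):
    print(policy, second_reading_value(policy))
# one-box -> 50.0, two-box -> 0.0: now one-boxing wins, because
# two-boxing logically implies the paying universes don't contain me.
```

The only line that changed from my first sketch is the one computing `created`, which is the whole disagreement in a nutshell.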
I’m not sure my second formulation is quite right either. It still seems to me that these nonexistence problems are qualitatively different from other Newcomblike problems: the fact that in some universes I don’t exist changes the nature of the problem in some way so that one-boxing is no longer rational, maybe because in those universes I’m not part of the same “information set.”
That being said, even if UDT recommends one-boxing, there are still several strong objections to the original post’s conclusions and to AnasKateris’ disturbing “evil parent” variant:
1. Azathoth is not “watching your universe”; it’s not simulating anything, so it is not analogous to Prometheus.
2. I’m not sure that not updating on the fact that you exist is logically coherent.
3. In any “evil parent” situation, if the demands the parents make are sufficiently horrible, it is better not to exist than to obey them.
4. These discussions focus primarily on what is individually rational, not what is collectively rational. It is collectively rational to two-box in the “evil parent” variant to deter evil parents from actually trying this, even if it isn’t individually rational. Similarly, the collectively rational thing to do in the Prometheus problem is probably to ask Omega if it can help you track down Prometheus, chain him to a rock, and make a giant eagle eat his constantly-regenerating liver to make sure he stops pulling crap like this.
If UDT makes you “lose” in the Prometheus problem and the “evil parent” variant, maybe it still needs some work.