"""
Game 1 Experimenter: “I’ve implemented the reward system in this little machine in front of you. The machine of course does not actually ‘know’ which of L or R you are; I simply built one machine, A, which pays out 1000 exactly when the ‘I am L’ button is pressed, and another identical-looking machine, B, which pays out 999 exactly when the ‘I am not L’ button is pressed. Then I placed the appropriate machine in front of you and the other one in front of your clone, whom you can see over there. So, which button do you press?”
Fissioned dadadarren: “This is exactly like the hypothetical I was discussing online recently; implementing it using those machines hasn’t changed anything. So there is still no correct answer for the objective of maximizing my money; and I guess my plan will be to...”
Experimenter: “Let me interrupt you for a moment; I’ve decided to add one more rule: I’m going to flip this coin, and if it comes up Heads I’ll swap the machines in front of you and the other clone. (flips coin) It’s Tails. Ah, so nothing changes; you can proceed with your original plan.”
Fissioned dadadarren: “Actually, this changes everything. I have now just watched the machine in front of me be chosen by true randomness from a set of two machines whose reward structures I know, so I will ignore the anthropic theming of the button labels, run a standard EV calculation, and determine that pressing the ‘I am L’ button is obviously the best choice.”
"""
Is this how it would go? Would watching a coin flip that otherwise does not affect the world change the clone’s calculation about what the correct action is, or about whether a correct action even exists? While that’s not quite a logical contradiction, it seems bizarre enough to me that it probably indicates an important flaw in the theory.
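For concreteness, here is a minimal sketch of the “standard EV calculation” the post-flip clone describes. The assumption (mine, for illustration) is that after the fair coin flip, the machine in front of the clone is equally likely to be machine A (pays 1000 iff ‘I am L’ is pressed) or machine B (pays 999 iff ‘I am not L’ is pressed):

```python
# Payout table from the experimenter's setup:
# machine A pays 1000 only for the 'I am L' button,
# machine B pays 999 only for the 'I am not L' button.
payout = {
    ("A", "I am L"): 1000,
    ("A", "I am not L"): 0,
    ("B", "I am L"): 0,
    ("B", "I am not L"): 999,
}

def expected_value(button: str) -> float:
    """Expected reward of pressing `button`, treating the machine
    as drawn uniformly from {A, B} by the fair coin flip."""
    return sum(0.5 * payout[(machine, button)] for machine in ("A", "B"))

for button in ("I am L", "I am not L"):
    print(button, expected_value(button))
# 'I am L' yields 500.0, 'I am not L' yields 499.5
```

So under this (non-anthropic) model the ‘I am L’ button wins by half a dollar of expected value, which is the calculation the post-flip clone is appealing to.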
Also, am I modelling dadadarren correctly in the dialogue above?