Objection 3. In order for the CDT agent to one-box, it needs a special “non-self-centered” utility function which, when inside the simulation, would value things outside it.
To expand on this, it’s not inconsistent to have an agent that only cares about how many delicious gummi bears xe personally gets to eat, and definitely not about how many gummi bears any other copy of xerself gets to eat.
Then, even given 50-50 anthropic uncertainty about whether xe is the real-world self or the simulated self, xe will two-box:
50% chance of being in a world where all choices are pointless (I’m presuming that Omega will stop the simulation once xe makes xer choice), since the only consequence is whether someone else’s opaque box is full of gummi bears or not.
50% chance of being in the real world, in which case the opaque box contains X gummies (where X=0 or 1000000), so two-boxing earns 1000+X gummies while one-boxing only earns X.
So in the CDT formulation, two-boxing still comes out 500 expected gummies ahead of one-boxing, whatever X is.
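For concreteness, here’s a minimal sketch of that calculation (the probabilities and payoffs are exactly the ones above; the function name is just illustrative):

```python
# Self-centered CDT calculation from the text.
P_SIM, P_REAL = 0.5, 0.5   # 50-50 anthropic uncertainty
SMALL_BOX = 1000           # the transparent box always holds 1000 gummies

def causal_eu(two_box, X):
    """Expected gummies for the purely self-centered agent.
    X is the opaque box's real-world contents (0 or 1000000); CDT holds
    it fixed, since the current choice can't causally affect it."""
    sim_payoff = 0   # in the simulation, xer choice feeds no one
    real_payoff = X + (SMALL_BOX if two_box else 0)
    return P_SIM * sim_payoff + P_REAL * real_payoff

for X in (0, 1_000_000):
    advantage = causal_eu(True, X) - causal_eu(False, X)
    print(f"X = {X}: two-boxing ahead by {advantage} expected gummies")
# Prints 500.0 for both values of X.
```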
Of course, a variant on quantum suicide could still lead to one-boxing here, but there are consistent theories of anthropics that reject quantum suicide. Also, Omega could pick a different color for the opaque box in the real and simulated worlds (without the agent knowing which is which), so that the real future agent can’t be the successor of the simulated one...
Oh. I missed that. This would also break the similarity to the Absent-Minded Driver problem...
But no, this doesn’t work, because Omega is known to always guess correctly, and there exist agents that one-box if the opaque box is red and two-box if it’s blue; with a color mismatch, Omega would mispredict such an agent. So the simulation must be perfect.
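A toy sketch of why such agents force a perfect simulation (the policy below is hypothetical, just an instance of the red/blue agent described above):

```python
# A hypothetical agent whose choice depends on the opaque box's color.
def color_conditional_policy(box_color):
    return "one-box" if box_color == "red" else "two-box"

# If Omega colored the boxes differently in simulation and reality,
# its prediction for this agent could come out wrong:
prediction = color_conditional_policy("blue")    # simulated box is blue
actual = color_conditional_policy("red")         # real box is red
print(prediction, actual, prediction == actual)  # two-box one-box False
```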
It’s still an almost-Newcomb problem that sane decision theories should pass.