Although I find one-boxing difficult to do in that scenario, as a human, it is apparent that a reflectively consistent decision theory would one-box, since that is what it would have precommitted to do, had it had the option to precommit (prior to its not-yet-determined existence). No backwards arrows of causality are needed, just a particular type of consistency (updatelessness or timelessness).
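To see why one-boxing is what a precommitted agent would choose, the expected payoffs can be sketched numerically. This is a minimal illustration, assuming the standard Newcomb payoffs ($1,000,000 in the opaque box if one-boxing is predicted, $1,000 always in the transparent box) and a predictor with accuracy `p`; the function names and the 99% accuracy figure are my own illustrative choices, not from the comment above.

```python
def ev_one_box(p):
    # With probability p the predictor foresaw one-boxing and filled
    # the opaque box with $1,000,000; otherwise it is empty.
    return p * 1_000_000

def ev_two_box(p):
    # With probability p the predictor foresaw two-boxing, so the agent
    # gets only the transparent $1,000; with probability (1 - p) the
    # predictor was wrong and both boxes pay out.
    return p * 1_000 + (1 - p) * 1_001_000

# With a 99%-accurate predictor, one-boxing dominates in expectation.
print(ev_one_box(0.99))  # 990000.0
print(ev_two_box(0.99))  # 11000.0
```

Solving `ev_one_box(p) > ev_two_box(p)` shows one-boxing wins in expectation whenever `p > 0.5005`, which is why a decision theory that could have precommitted would bind itself to one-box rather than reason causally at the moment of choice.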