Likewise, if we assume the agent's behavior in Newcomb's problem is also determined by a function (its *decision procedure*), then, if the predictor can model this function, it can accurately predict what the agent will do.
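As a concrete illustration (a minimal sketch, not from the original text; all function names here are hypothetical), a predictor with access to the agent's decision procedure can "predict" the agent simply by running that same function before filling the boxes:

```python
def agent_decision() -> str:
    """The agent's decision procedure: returns 'one-box' or 'two-box'."""
    return "one-box"

def predictor(decision_procedure) -> str:
    # The predictor models the agent by executing its decision procedure.
    return decision_procedure()

def fill_boxes(prediction: str) -> dict:
    # Box A is transparent ($1,000); box B contains $1,000,000 only if
    # the predictor expects the agent to one-box.
    return {"A": 1_000, "B": 1_000_000 if prediction == "one-box" else 0}

boxes = fill_boxes(predictor(agent_decision))
payoff = boxes["B"] if agent_decision() == "one-box" else boxes["A"] + boxes["B"]
print(payoff)  # 1000000: a deterministic one-boxer facing an accurate predictor
```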
How does this not fall afoul of the Halting Problem?
If you make your decision in a bounded amount of time (e.g. in under 1 million years), then the space of possible algorithms you could be running is restricted to those that are guaranteed to output a decision within that bound. The Halting Problem only arises for arbitrary programs with no such guarantee; a time-bounded program can always be simulated to completion within its bound, so the Halting Problem doesn't apply.
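To make that last step concrete, here is a hedged sketch (the step-counting model and all names are assumptions for illustration, not from the original text). The agent is modeled as a generator that yields once per computation step and finally returns a decision; since the agent is guaranteed to decide within the budget, the simulation loop always terminates, and no halting oracle is needed:

```python
def bounded_predict(agent_factory, step_budget: int):
    """Simulate the agent for at most `step_budget` steps.

    Because the agent is (by assumption) guaranteed to decide within the
    budget, this loop always terminates.
    """
    gen = agent_factory()
    for _ in range(step_budget):
        try:
            next(gen)            # advance the agent one computation step
        except StopIteration as done:
            return done.value    # the agent's decision
    # Excluded by assumption, but kept so the sketch is total:
    raise RuntimeError("agent exceeded its time bound")

def agent():
    # A toy decision procedure that "thinks" for 10 steps, then one-boxes.
    for _ in range(10):
        yield
    return "one-box"

print(bounded_predict(agent, step_budget=1_000))  # -> 'one-box'
```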