Isn’t this identical to the proof for why there’s no general algorithm for solving the Halting Problem?
The Halting Problem asks for an algorithm A(S, I) that, when given the source code S and input I of another program, reports whether S(I) halts (vs. runs forever).
There is a proof that says A does not exist. There is no general algorithm for determining whether an arbitrary program will halt. “General” and “arbitrary” are important keywords because it’s trivial to consider specific algorithms and specific programs and say, yes, we can determine that this specific program will halt via this specific algorithm.
That proof of the Halting Problem (for a general algorithm and arbitrary programs!) works by defining a pathological program S that inspects what the general algorithm A would predict and then does the opposite.
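As a sketch of that construction (assuming a hypothetical decider `halts`, which the proof shows cannot actually exist):

```python
def halts(prog, inp):
    """Hypothetical halting decider: True iff prog(inp) halts.
    Assumed to exist only for the sake of contradiction."""
    raise NotImplementedError  # no general implementation is possible

def pathological(prog):
    """The diagonal program: ask the decider about ourselves
    and do the opposite of whatever it predicts."""
    if halts(prog, prog):   # predicted to halt?
        while True:         # ...then loop forever
            pass
    return                  # predicted to loop? ...then halt immediately

# Considering pathological(pathological) contradicts either prediction,
# so no correct general `halts` can exist.
```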
What you’re describing above seems almost word-for-word the same construction used to build the pathological program S, except the algorithm A for “will this program halt?” is replaced by the predictor “will this person one-box?”.
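The analogy can be made concrete with a toy sketch (names and the predictor interface are my own, purely illustrative): an agent that consults whatever predictor it is handed and then does the opposite defeats every possible predictor.

```python
def pathological_agent(predict_one_box):
    """Toy contrarian agent: ask the predictor what it expects
    of this agent, then do the opposite."""
    if predict_one_box(pathological_agent):
        return "two-box"   # predicted to one-box -> take both boxes
    return "one-box"       # predicted to two-box -> take only one box

# No predictor can be right about this agent:
predicts_one_boxing = lambda agent: True
predicts_two_boxing = lambda agent: False
# pathological_agent(predicts_one_boxing) -> "two-box"
# pathological_agent(predicts_two_boxing) -> "one-box"
```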
I’m not sure that this necessarily matters for the thought experiment. For example, perhaps we can pretend that the predictor works on all strategies except the pathological case described here, and other strategies isomorphic to it.