Several issues, all of which I have highlighted elsewhere and I’ll highlight here too:
Omega cannot exist without being super-Turing, assuming agents are Turing-complete.
If you have an algorithm for producing an Omega, then I run said algorithm and run Omega on myself and do whatever Omega predicts I won’t do[1][2].
This is a contradiction, hence either:
A: If Omega cannot exist, there’s no sense spending time on this[3].
B: If Omega is super-Turing, then all bets are off.
C: If the agent is not Turing-complete, see below.
If agents aren’t Turing-complete, then I don’t think CDT applies.
(You may be able to come up with agents for specific problems, but you cannot come up with an agent that applies CDT to all encountered problems[4].)
“reasonably accurate predictor” hides some subtlety here:
Does this mean that “Omega-lite” predicts all agents correctly >50% of the time?
Or does this mean it can e.g. predict 80% of agents 80% of the time?
If the former, issue 1 still holds. “Omega-lite” cannot have prediction accuracy better than a coinflip for my “run Omega and do the opposite of whatever it says” counterexample without falling into options A-C.
If the latter, there’s no guarantee that “Omega-lite” has any accuracy greater than a coinflip for any specific agent, and there’s no paradox as a result.
(There are also a few other subtleties. Omega-lite cannot predict itself (and do computation on the result), or else you get the same contradiction with Omega-lite predicting itself and doing whatever it thinks it won’t do. Ditto, you end up with similar issues if >1 Omega-lite can exist and predict other Omega-lites.)
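To make the distinction concrete, here’s a toy Python sketch (all names hypothetical, not any real construction): a computable “Omega-lite” that predicts a fixed population of simple agents 80% of the time, yet is always wrong about the contrarian agent that runs the predictor on itself and does the opposite.

```python
# Toy sketch (hypothetical names): a computable predictor can do well on
# a population of agents while being *always* wrong about an agent that
# runs the predictor on itself and does the opposite.

def omega_lite(agent):
    # Some fixed computable heuristic; here, always guess "one-box".
    return "one-box"

# 80 one-boxers and 20 two-boxers: omega_lite is right 80% of the time.
population = [lambda: "one-box"] * 80 + [lambda: "two-box"] * 20

def contrarian():
    # Run the predictor on myself, then do whatever it says I won't do.
    return "two-box" if omega_lite(contrarian) == "one-box" else "one-box"

accuracy = sum(omega_lite(a) == a() for a in population) / len(population)
print(accuracy)                                 # 0.8 on the population
print(omega_lite(contrarian) == contrarian())   # False: wrong on this agent
```

A smarter `omega_lite` fares no better against `contrarian`, since the agent inverts whatever the predictor returns; the predictor can only escape by failing to terminate or by being uncomputable.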
...I should really write this up and turn it into a proper post at some point, as this isn’t the first post I’ve seen that ignores these issues.

See also https://www.lesswrong.com/posts/uS6vdQH8zpHyMAsxR/ordinary-and-unordinary-decision-theory?commentId=RhxRDhtSNSfmmFqe9

[1] I’m using first-person here to make the distinction a little clearer, as ‘itself’ is ambiguous as to whether it refers to the agent or Omega. Alternatively: I submit an agent A that runs Omega on A, and then agent A does whatever Omega said agent A wouldn’t do.

[2] Or, in the probabilistic case: I do X if and only if Omega predicts a <50% probability of me doing X.

[3] ...mostly. E.g. halting oracles don’t exist, but are sometimes useful as stepping stones for proofs.

[4] Among other issues, parsing the question itself may be impossible...
> If you have an algorithm for producing an Omega, then I run said algorithm and run Omega on myself and do whatever Omega predicts I won’t do
So then my decision procedure simulates Omega, who simulates my decision procedure. “My decision procedure” is the same decision procedure in both cases, so I think it’s impossible for me to do what Omega predicts I won’t do. Or is this where Omega becomes super-Turing? My knowledge on that is limited.
Thanks for your comment!

> So then my decision procedure simulates Omega, who simulates my decision procedure.

Yep. The problem is the unbounded recursion here, give or take: A reasons about B reasoning about A reasoning about B reasoning about A, ad infinitum.
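A minimal Python illustration of that regress (toy stand-ins, assuming each side simulates the other by direct call): neither simulation can bottom out, and the program exhausts the stack.

```python
# Toy mutual simulation: Omega simulates the agent, and the agent
# simulates Omega. Each call re-enters the other and never bottoms out.

def omega(agent):
    # Omega "predicts" by simulating the agent.
    return agent()

def agent():
    # The agent decides by simulating Omega's prediction of itself.
    return "two-box" if omega(agent) == "one-box" else "one-box"

try:
    agent()
except RecursionError:
    print("A simulates B simulates A ... without end")
```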
> I think it’s impossible for me to do what Omega predicts I won’t do.
Hence, there’s a contradiction. Assuming the Oracle side is correct, it’s fairly straightforward to show that the agent is also correct… and that this leads to a contradiction. Hence, the Oracle side cannot be correct[1].
It’s easiest to see via analogy to the Halting problem[2].
~~Omega~~ Halting Problem Solver → a decidable algorithm that predicts ~~what an arbitrary agent does~~ if an arbitrary Turing Machine halts

If you have a decidable algorithm for producing ~~an Omega~~ a Halting Problem Solver, then I run said algorithm and run said resulting ~~Omega~~ Halting Problem Solver on myself and do whatever said ~~Omega~~ Halting Problem Solver predicts I won’t do (~~take both boxes~~ go into an infinite loop if and only if said ~~Omega~~ Halting Problem Solver predicts I won’t)

This is a contradiction, therefore an algorithm for producing ~~an Omega~~ a Halting Problem Solver cannot exist.

Or, in the slightly more straightforward ‘standard’ proof form:

~~Omega~~ Halting Problem Solver → a decidable algorithm that predicts ~~what an arbitrary agent does~~ if an arbitrary Turing Machine halts

If you have ~~an Omega~~ a Halting Problem Solver, then I run said ~~Omega~~ Halting Problem Solver on myself and do whatever said ~~Omega~~ Halting Problem Solver predicts I won’t do (~~take both boxes~~ go into an infinite loop if and only if said ~~Omega~~ Halting Problem Solver predicts I won’t)

This is a contradiction, therefore ~~an Omega~~ a Halting Problem Solver cannot exist.
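The same diagonalization can be sketched in Python, under the stated (impossible) assumption that some total decider `halts` exists:

```python
# Sketch: assume some total, computable decider halts(p) existed.
# The diagonal program below contradicts any answer it could give.

def make_diagonal(halts):
    def diagonal():
        if halts(diagonal):   # decider says "diagonal halts"...
            while True:       # ...so loop forever instead
                pass
        return "halted"       # decider says "loops", so halt immediately
    return diagonal

# Whatever Boolean a candidate decider returns about the diagonal
# program, it is wrong:
says_loops = lambda p: False
assert make_diagonal(says_loops)() == "halted"   # halts; decider said loops

says_halts = lambda p: True
# make_diagonal(says_halts)() would loop forever -- decider wrong again.
```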
Or is this where Omega becomes super-Turing?
This contradiction only applies if the agent can simulate Omega. (The premise requires that Omega can simulate the agent.)
One way of avoiding this contradiction is if the agent is not Turing-complete, and Omega can simulate the agent but not vice versa.
If the agent is Turing-complete, Omega must also be Turing-complete in order to simulate the agent. By the Church-Turing thesis, if both are computable, each can simulate the other, and we get the contradiction above. So if the agent is Turing-complete, Omega must not be computable (and is e.g. an Oracle Machine). This is where Omega becomes super-Turing.

(A third possibility is that Omega is allowed to be semidecidable… but in that case, if I’m an agent that sends Omega into an infinite loop, Omega shouldn’t have been able to ask the question in the first place.)
[1] In particular, I suspect the failure mode of many Omegas would be to go into an infinite loop.

[2] This is a slightly different proof than the ‘standard’ proof of the Halting Problem[3], but this proof also works.

[3] The standard proof starts with ‘assume you have a TM that solves the Halting Problem’, and directly shows that said TM is a contradiction. This proof starts with ‘assume you have an algorithm that produces a TM that solves the Halting Problem’, and shows that said algorithm is a contradiction.