If there’s some set S of computer programs, and an algorithm A solves the halting problem for S, then A plus a small amount of extra code predicts all sequences generated by programs in S.
Method: Go through S in some order. Skip over each program that doesn’t halt. Skip over each program that is falsified by the data. Stop when you reach a program that fits the data, and use its predictions until it’s falsified, at which point you move on to the next program...
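A minimal sketch of that method in Python, under two labeled assumptions: programs in S are modeled as callables where `p(n)` returns the n-th element of the sequence p generates, and `halts` is a hypothetical oracle (this is exactly the part an ordinary computer cannot supply) telling us whether `p(n)` halts.

```python
def predict_next(S, data, halts):
    """Predict the element following the observed prefix `data`,
    assumed to be generated by some program in S.

    S     -- candidate programs, enumerated in some fixed order
    data  -- list of observed sequence elements
    halts -- hypothetical oracle: halts(p, n) says whether p(n) halts
    """
    for p in S:
        # Skip programs the oracle says won't halt on the inputs we need.
        if not all(halts(p, n) for n in range(len(data) + 1)):
            continue
        # Skip programs already falsified by the observed data.
        if any(p(n) != x for n, x in enumerate(data)):
            continue
        # First surviving program: use its prediction (until falsified).
        return p(len(data))
    return None  # no program in S fits the data
```

For example, with `S = [lambda n: 0, lambda n: n * n]` and observed data `[0, 1, 4]`, the constant program is falsified at the second element, so the predictor returns the squares program’s next value, 9.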
Humans are computable. So there exists a set of programs that humans cannot predict, and a (closely related) set whose halting we cannot decide. “Cannot” here means constitutionally incapable, not merely unable in practice.
Clearly, we should not expect an AI we build to be able to predict these programs.
What if our world is one of those programs? That sounds scary.
Not your question, but great intelligence is necessarily highly complex—according to:
“Is there an Elegant Universal Theory of Prediction?”—Shane Legg.
http://www.vetta.org/documents/IDSIA-12-06-1.pdf
Thanks. This looks helpful.