There exist some maths problems that even an ASI can’t solve, because they require more computation than fits in the universe. To prove this, consider the set of all programs that take in an arbitrary Turing machine and return “halt”, “no halt”, or “unsure”. Rule out all the programs that are ever wrong. Rule out all the programs that require more computation than fits in the universe. Now consider a program that takes in a Turing machine and applies all the remaining programs to it. If any of them returns “halt”, then you have worked out that the machine halts in finite time. If any returns “no halt”, then you know it does not halt. Since the halting problem can’t be solved, this program must sometimes return “unsure”. That is, there must exist instances of the halting problem that no program that fits in the universe can solve (assuming the universe contains a finite amount of computation).
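The combination step in this argument can be sketched in a few lines of Python. Here `checkers` stands in for the hypothetical set of never-wrong, universe-sized halting analysers (an assumption of the argument, not real software), and the toy checkers at the bottom are purely illustrative:

```python
def combined_verdict(machine, checkers):
    """Apply every (hypothetical) never-wrong checker to `machine`.

    Each checker returns "halt", "no halt", or "unsure". Because no
    checker is ever wrong, a single definite answer settles the question;
    "unsure" is returned only if every checker abstains.
    """
    for check in checkers:
        verdict = check(machine)
        if verdict in ("halt", "no halt"):
            return verdict
    return "unsure"  # every checker abstained

# Toy stand-in checkers (not real halting analysers):
always_unsure = lambda m: "unsure"
spots_empty_loop = lambda m: "no halt" if m == "while True: pass" else "unsure"

print(combined_verdict("while True: pass", [always_unsure, spots_empty_loop]))
```

The undecidability of the halting problem then says that no finite collection of such checkers can avoid the `"unsure"` branch on every input.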
These problems aren’t actually that important to the real world. They are abstract mathematical limitations that wouldn’t stop the AI from achieving a decisive strategic advantage. There are limits, but they aren’t very limiting.
The AI needs at least some data to deduce facts about the world. This is also not very limiting. Will it need to build huge pieces of physics equipment to work out how the universe works, or will it figure it out from the data we have already gathered? Could it figure out string theory from a copy of Kepler’s notes? We just don’t know. It depends on whether there are several different theories that would produce similar results.
Your example seems a bit weird to me, because the amount of computation a program requires depends on its input. There are some inputs (in fact all but finitely many of them) such that no program can read the input using all the computing power in the universe. So trivially there are instances of the halting problem that no program in the universe can solve (because such a program cannot even read the input).
Also, I don’t think the definition of “solve” is precise enough for the mathematical-flavored reasoning you seem to be trying to do here. An AI could flip a coin to answer all yes/no questions; does this count as “solving” the ones it gets right? If so, it seems that there’s no yes/no problem that the AI couldn’t solve (if it got lucky).
Incidentally, I think there are plenty of simple math problems that an AI wouldn’t be able to solve. For example I think an AI probably wouldn’t be able to give an answer to the Collatz conjecture that’s any more satisfying than the one we already have (namely, that there is a heuristic argument that it is probably true, but a small chance that it might be wrong and no way to tell). Such problems might or might not be relevant to the AI’s strategic interests.
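For concreteness, the Collatz map in question (halve even numbers, send odd n to 3n+1) is trivial to run for any particular starting value, even though no proof is known that every starting value eventually reaches 1. A minimal sketch:

```python
def collatz_steps(n):
    """Count iterations of the Collatz map until n reaches 1.

    The conjecture is that this loop terminates for every positive
    integer n; no proof is known, so termination is not guaranteed
    in general, only observed for every value ever tried.
    """
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

print(collatz_steps(27))  # 27 has a famously long trajectory: 111 steps
```

Checking individual cases like this is exactly the kind of evidence that supports the heuristic argument without settling the conjecture.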
Finally, some math problems can’t be solved even with an infinite Turing machine!