Your example seems a bit weird to me, because the amount of computation a program requires depends on its input. There are some inputs (in fact all but finitely many of them) such that no program can read the input using all the computing power in the universe. So trivially there are instances of the halting problem that no program in the universe can solve (because such a program cannot even read the input).
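To make the counting point concrete, here's a toy sketch (the storage bound is a made-up illustrative number, not a real physical estimate):

```python
# Suppose the universe can store at most MAX_BITS bits. Any program must
# at least scan its input, so inputs longer than that can't be processed
# at all. MAX_BITS is an assumption chosen purely for illustration.
MAX_BITS = 10**90

def can_even_read(input_length_bits):
    """True iff an input of this length fits in the universe's storage."""
    return input_length_bits <= MAX_BITS

# Only finitely many inputs fit under the bound, so all but finitely
# many halting-problem instances are unreadable, let alone solvable.
print(can_even_read(10**6))    # a small input: fits
print(can_even_read(10**100))  # unreadable in principle
```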
Also, I don’t think the definition of “solve” is precise enough for the mathematical-flavored reasoning you seem to be trying to do here. An AI could flip a coin to answer all yes/no questions; does this count as “solving” the ones it gets right? If so, it seems that there’s no yes/no problem the AI couldn’t solve (if it got lucky).
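The coin-flip “solver” is trivial to write down, which is part of the point: under a loose enough definition of “solve”, the following sketch solves any particular yes/no instance with probability 1/2.

```python
import random

def coin_flip_decider(instance):
    """'Answers' any yes/no question by flipping a fair coin,
    ignoring the instance entirely."""
    return random.random() < 0.5

# For any fixed instance, this is right half the time. A definition of
# "solve" that counts lucky guesses therefore rules nothing out.
answer = coin_flip_decider("does this Turing machine halt?")
print(answer)
```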
Incidentally, I think there are plenty of simple math problems that an AI wouldn’t be able to solve. For example I think an AI probably wouldn’t be able to give an answer to the Collatz conjecture that’s any more satisfying than the one we already have (namely, that there is a heuristic argument that it is probably true, but a small chance that it might be wrong and no way to tell). Such problems might or might not be relevant to the AI’s strategic interests.
Finally, some math problems can’t be solved even with an infinite Turing machine! For example, the halting problem for Turing machines equipped with a halting oracle is undecidable even by such machines, and this pattern repeats all the way up the arithmetical hierarchy.