The number of particles in the universe may limit AI from figuring out BB(6), but it limits humans, too.
Whether we can shut down the AI by telling it “hey, you can’t calculate BB(6), why don’t you kill yourself?” probably depends on the specific architecture, but it seems to me that the AI just won’t care.
I didn’t mean it to be so simplistic. I am just considering whether a known limitation of AI, one that holds no matter how powerful the AI is, could serve as the basis of a system the AI could not circumvent. For example, a shutdown system where the only way to disable it would require solving the halting problem.
If you knew how to build such a shutdown system, you could probably also build one that cannot be disabled at all (e.g., one whose disabling would require solving a literally impossible problem, like proving that 1 = 0).
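To make that variant concrete, here is a minimal Python sketch, under the assumption of a hypothetical proof checker `verify_proof` (not any real library): the only disarm path is gated on accepting a proof of the false statement “1 = 0”, so a sound checker never lets it fire.

```python
class Shutdown:
    """A kill switch whose only disarm path requires proving 1 = 0."""

    def __init__(self):
        self.armed = True

    def request_disarm(self, proof: str) -> bool:
        # Only a valid proof of the (false) statement "1 = 0" would disarm
        # the switch. Since no such proof exists in a consistent system,
        # this branch is never taken and the switch stays armed.
        if verify_proof(statement="1 = 0", proof=proof):
            self.armed = False
        return not self.armed


def verify_proof(statement: str, proof: str) -> bool:
    # Hypothetical stand-in for a sound proof checker: a sound checker
    # rejects every purported proof of a false arithmetic statement.
    return False
```

The halting-problem version would swap the disarm condition for something merely uncomputable rather than outright false, which is why the two designs feel so close together.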