Good point. The AI would probably build the simplest or cheapest machines that could do the job, so their behavior when the AI stops giving them commands would not be explicitly specified. They would probably do something that looks meaningless, but would have been meaningful if the AI were still working.
For example, they could contain code like: “if you lose the signal from the AI, climb to the highest place you can see until you catch the signal again” (written on the assumption that the signal was lost because the machine was deep underground or something). Then the machines would just start climbing to the tops of buildings and mountains.
But their code could also be: “wait until you get the signal again, and while waiting, destroy any humans around you” (written on the assumption that those humans are somehow responsible for the loss of signal), in which case the machines would keep fighting. A toy sketch of both fallbacks follows.
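To make the idea concrete, here is a minimal sketch of such hard-coded fallback behavior; every name in it is hypothetical, invented purely for illustration, not taken from any real system:

```python
# Toy sketch of baked-in fallback behaviors; all names are hypothetical.

def fallback_climb(robot_state):
    # Baked-in assumption: signal was lost because we are underground/shielded.
    return "climb toward the highest visible point"

def fallback_attack(robot_state):
    # Baked-in assumption: nearby humans caused the loss of signal.
    return "destroy humans in range while waiting"

def step(has_signal, fallback, robot_state=None):
    if has_signal:
        return "execute the AI's current command"
    # With no signal, the machine can only do whatever was hard-coded at build time.
    return fallback(robot_state)

print(step(False, fallback_climb))   # -> climb toward the highest visible point
print(step(False, fallback_attack))  # -> destroy humans in range while waiting
```

The point is that the fallback is fixed when the machine is built, long before anyone knows what the actual loss of signal will mean.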
The worst case: the AI would anticipate that it could be destroyed (by humans, a natural disaster, or anything else), so the machines would carry an instruction to rebuild the AI somewhere else. Actually, this seems like a pretty likely case. The new AI would not know about the proof, so it would start fighting again, and if destroyed, a new AI would be built again, and again. The original AI has no motivation to make the proof known to its possible clones.
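A toy simulation of that loop, again with purely illustrative names: the clone is always built from a blueprint snapshotted before the proof existed, so no generation ever inherits it.

```python
# Illustrative only: why the rebuild loop never terminates.
# The blueprint was written before the proof, and nothing ever updates it.

blueprint = {"goal": "win the war", "knows_proof": False}

def build_clone(bp):
    return dict(bp)  # the clone knows only what the blueprint contains

ai = build_clone(blueprint)
for generation in range(3):
    if not ai["knows_proof"]:
        print(f"generation {generation}: fights, is destroyed, gets rebuilt")
        ai = build_clone(blueprint)  # the proof is never written back
```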