At least assuming that it has fully automated infrastructure. If it still needs humans to keep the power plants running and to do physical maintenance in the server rooms, it becomes indirectly susceptible to the bioweapons.
Yes, but the (a?) moral of That Alien Message is that a superintelligence could build its own self-sustaining infrastructure faster than one might think using methods you might not have thought of, for example, through bio/nano engineering instead of robots. Or, you know, something else we haven’t thought of.
This is unrealistic. It assumes:
Orders of magnitude more intelligence than ours
That such intelligence would actually be useful in a physical world with hard physical limits
The more worrying prospect is that the AI might not fear suicide at all. Suicidal actions are, after all, quite prevalent among humans.