I don’t think it’s even that hard. Presumably an arbitrarily stronger intelligence could build arbitrarily subtle disaster-making flaws into whatever “helpful” technology or science it gives us. It could even induce a generalized harmful sensation, as was discussed in another thread recently.
See the first chapter of Vinge’s “A Fire Upon the Deep” for a fictional example of such arbitrarily subtle disaster-making flaws.