To throw one out there: perhaps the first superintelligence was created by a people very concerned about AI risk and friendliness, and one of its goals is simply to subtly suppress (by a very broad definition) unfriendly AIs in the rest of the universe while minimizing disruption otherwise.