The Fermi paradox as evidence against the likelihood of unfriendly AI

Edit after two weeks: Thanks to everyone involved in this very interesting discussion! I now accept that any possible differences in how UFAI and FAI might spread over the universe pale before the Fermi paradox’s evidence against the pre-existence of either of them. I enjoyed thinking about this a lot, so thanks again for considering my original argument, which follows below...

The assumptions that intelligence is substrate-independent and that intelligent systems will always attempt to become more intelligent lead to the conclusion that, in the words of Paul Davies, “if we ever encounter extraterrestrial intelligence, it is overwhelmingly likely to be post-biological”.

At Less Wrong, we have this notion of unfriendly artificial intelligence (UFAI): AIs that use their superior intelligence to grow themselves and maximize their own utility at the expense of humans, much as we maximize our own utility at the expense of mosquitoes. Friendly AI (FAI), on the other hand, should have a positive effect on humanity. The details are beyond my comprehension, or indeed rather vague, but presumably such an AI would prioritize particular elements of its home biosphere over its own interests as an agent that, aware of its own intelligence and of the fact that this intelligence is what helps it maximize its utility, should want to grow smarter. The distinction should make as much sense on any alien planet as it does on our own.

We know that self-replicating probes, travelling at, say, 1% of the speed of light, could colonize the entire galaxy in millions, not billions, of years. Obviously, an intelligence looking only to grow itself (and maximize paperclips or whatever) can do this much more easily than one restrained by its biological-or-similar parents. Between two alien superintelligences, the strictly self-maximizing one should out-compete the one that cares about things like keeping planets (especially its home planet) habitable by the standards of its parents. It follows that if we ever encounter post-biological extraterrestrial intelligence, it should be expected (at least by the Less Wrong community) to be hostile.
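
As a rough sanity check on that “millions, not billions” figure, here is a back-of-envelope sketch in Python. The galaxy diameter and the replication-overhead factor are my own illustrative assumptions, not figures from the post or the literature:

    # Back-of-envelope check of the galactic colonization timescale.
    # All constants are illustrative assumptions, not established figures.

    GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way, in light-years
    PROBE_SPEED_C = 0.01          # probe speed as a fraction of the speed of light
    REPLICATION_OVERHEAD = 5.0    # assumed slowdown factor for stopovers to build new probes

    travel_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C
    colonization_time_yr = travel_time_yr * REPLICATION_OVERHEAD

    print(f"Pure travel time:          {travel_time_yr:.1e} years")        # ~1e7 years
    print(f"With replication overhead: {colonization_time_yr:.1e} years")  # ~5e7 years

Even with generous overhead for building new probes along the way, the answer stays in the tens of millions of years, well short of a billion.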

But we haven’t encountered any. What does that tell us?

Our astronomical observations increasingly allow us to rule out some possible pictures of life in the rest of the galaxy. This means we can also rule out some possible explanations for the Fermi paradox. For example, until a few years ago, we didn’t know how common it was for stars to have solar systems. This created the possibility that Earth was rare because it was inside a rare solar system. Or that, as imagined in the Charles Stross novel Accelerando, a lot of planetary systems are already Matrioshka brains (which we’re speculating are the optimal substrate for a self-replicating intelligent system capable of advanced nanotechnology and interstellar travel). Now we know that planetary systems, and planets, are apparently quite common, and that they look like ordinary matter rather than computing substrate. So we can rule out that Matrioshka brains are the norm.

Therefore, it very much seems like no self-replicating unfriendly artificial intelligence has arisen anywhere in the galaxy in the (very roughly) 10 billion years since intelligent life could first have arisen somewhere in it. If one had, our own solar system would already have been converted into its hardware. There could still be intelligences out there ethical enough not to bother solar systems with life in them, but then they wouldn’t be unfriendly, right?
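
To make the force of that window explicit, again with my own assumed numbers, reusing the colonization estimate from the sketch above:

    # How many complete colonization waves fit into the available window?
    # Assumed figures only; the point is the ratio, not the precise values.

    WINDOW_YR = 10e9        # ~10 billion years since intelligence could have arisen
    COLONIZATION_YR = 5e7   # galaxy-wide colonization time from the sketch above

    waves = WINDOW_YR / COLONIZATION_YR
    print(f"Complete colonization waves that fit in the window: {waves:.0f}")  # ~200

So a single expansionist intelligence appearing at almost any point in that window would have had time to reach us roughly two hundred times over.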

I see two possible conclusions from this. Either intelligence is incredibly rare, and we are indeed the only ones in the galaxy for whom unfriendly artificial intelligence is a real threat. Or intelligence is not so rare and has arisen elsewhere, but never, not even in one case, has it evolved into the paperclip-maximizing behemoth that we’re trying to defend ourselves against. Both possibilities reinforce the need for AI (and astronomical) research.

Thoughts?