The Fermi paradox as evidence against the likelihood of unfriendly AI

Edit after two weeks: Thanks to everyone involved in this very interesting discussion! I now accept that any possible differences in how UFAI and FAI might spread over the universe pale before the Fermi paradox's evidence against the pre-existence of any of them. I enjoyed thinking about this a lot, so thanks again for considering my original argument, which follows below...

The assumptions that intelligence is substrate-independent, and that intelligent systems will always attempt to become more intelligent, lead to the conclusion that, in the words of Paul Davies, "if we ever encounter extraterrestrial intelligence, it is overwhelmingly likely to be post-biological".

At Less Wrong, we have this notion of unfriendly artificial intelligence: AIs that use their superior intelligence to grow themselves and maximize their own utility at the expense of humans, much like we maximize our own utility at the expense of mosquitoes. Friendly AI, on the other hand, should have a positive effect on humanity. The details are beyond my comprehension, or indeed rather vague, but presumably such an AI would prioritize particular elements of its home biosphere over its own interests as an agent that, aware that its intelligence is what helps it maximize its utility, should want to grow smarter. The distinction should make as much sense on any alien planet as it does on our own.

We know that self-replicating probes, travelling at, say, 1% of the speed of light, could colonize the entire galaxy in millions, not billions, of years. Obviously, an intelligence looking only to grow itself (and maximize paperclips or whatever) can do this much more easily than one restrained by its biological-or-similar parents. Between two alien superintelligences, one strictly self-maximizing should out-compete one that cares about things like the habitability of planets (especially its home planet) by the standards of its parents. It follows that if we ever encounter post-biological extraterrestrial intelligence, it should be expected (at least by the Less Wrong community) to be hostile.
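To make the "millions, not billions" figure concrete, here is a minimal back-of-envelope sketch. The galaxy diameter, the hop distance between stars, and the rebuild time per stop are my own illustrative assumptions, not figures from the argument above; the point is only that even with generous pauses for self-replication, the timescale stays around tens of millions of years.

```python
# Back-of-envelope colonization estimate (illustrative assumptions only).

GALAXY_DIAMETER_LY = 100_000   # rough diameter of the Milky Way in light-years
PROBE_SPEED_C = 0.01           # probe speed as a fraction of light speed (1% of c)
HOP_DISTANCE_LY = 10           # assumed typical distance between target star systems
PAUSE_YEARS_PER_HOP = 1_000    # assumed time to build the next probe at each stop

hops = GALAXY_DIAMETER_LY / HOP_DISTANCE_LY
travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C   # ~10 million years in transit
pause_years = hops * PAUSE_YEARS_PER_HOP            # ~10 million years spent rebuilding
total_years = travel_years + pause_years

print(f"~{total_years / 1e6:.0f} million years to span the galaxy")  # ~20 million years
```

Even at tens of millions of years, a colonization wave like this would take well under 1% of the roughly 10 billion years during which intelligent life could have arisen.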

But we haven't. What does that tell us?

Our astronomical observations increasingly allow us to rule out some possible pictures of life in the rest of the galaxy. This means we can also rule out some possible explanations for the Fermi paradox. For example, until a few years ago, we didn't know how common it was for stars to have solar systems. This created the possibility that Earth was rare because it was inside a rare solar system. Or that, as imagined in the Charles Stross novel Accelerando, a lot of planetary systems are already Matrioshka brains (which we're speculating are the optimal substrate for a self-replicating intelligent system capable of advanced nanotechnology and interstellar travel). Now we know that planetary systems, and planets, are apparently quite common. So we can rule out that Matrioshka brains are the norm.

Therefore, it very much seems that no self-replicating unfriendly artificial intelligence has arisen anywhere in the galaxy in the (very roughly) 10 billion years since intelligent life could have arisen somewhere in the galaxy. If it had, our own solar system would already have been converted into its hardware. There could still be intelligences out there ethical enough not to bother solar systems with life in them, but then they wouldn't be unfriendly, right?

I see two possible conclusions from this. Either intelligence is incredibly rare, and we are indeed the only place in the galaxy where unfriendly artificial intelligence is a real threat. Or intelligence is not so rare and has arisen elsewhere, but never, not even in one case, has it evolved into the paperclip-maximizing behemoth that we're trying to defend ourselves from. Both possibilities reinforce the need for AI (and astronomical) research.

Thoughts?