[Paper] The Global Catastrophic Risks of the Possibility of Finding Alien AI During SETI

[edit: it looks like immediately after the paper was published, the journal went extinct, so the link is no longer working]

My article on the topic has finally been published, 10 years after the first draft. I have discussed the problem on LW before. The preprint, free of the paywall, is here.

The main difference between the current version and my previous post is that I concluded that such an attack is less probable: if we take into account the distribution in the Universe of naive civilizations at our level and of civilizations that already have powerful AI and act as SETI-senders, the attack becomes possible only if most naive civilizations go extinct before creating their own AI. In that case, succumbing to a SETI-attack may be net positive, as the chance that the message comes from a benevolent alien AI becomes our only way to escape otherwise inevitable extinction. In any case, we should be cautious with any alien message, especially one containing descriptions of computers and programs to run on them.
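To make the population argument concrete, here is a rough toy sketch (my own illustration, not taken from the paper): civilizations appear at a steady rate, spend a short "naive, radio-listening, pre-AI" stage, and survive it with probability p to become long-lived AI-level SETI-senders. The number of vulnerable receivers per sender is appreciable only when p is small, i.e. when most naive civilizations die before building AI. All parameter values below are arbitrary placeholders.

```python
# Toy steady-state model of the SETI-attack argument (illustrative only).
# Assumptions (mine, not the paper's): civilizations are born at a constant
# rate; each spends T_NAIVE in a naive, radio-listening, pre-AI stage; with
# probability p it survives to become an AI-level SETI-sender that persists
# for T_AI. All numbers are arbitrary placeholders.

BIRTH_RATE = 1.0      # new civilizations per million years (placeholder)
T_NAIVE = 100 / 1e6   # naive stage length in millions of years (~100 years)
T_AI = 1e3            # AI-level civilization lifetime in millions of years

def receivers_per_sender(p_survive_to_ai: float) -> float:
    """Steady-state ratio of naive (vulnerable) civilizations to AI-level senders."""
    naive = BIRTH_RATE * T_NAIVE                   # civilizations currently in the naive stage
    senders = BIRTH_RATE * p_survive_to_ai * T_AI  # civilizations that reached the AI stage
    return naive / senders

for p in (0.5, 0.01, 1e-6):
    print(f"P(survive to AI) = {p:g}: ~{receivers_per_sender(p):.2e} naive civilizations per sender")
```

In this toy model a SETI-attack finds a non-negligible number of targets per sender only when the survival probability is tiny, and that is exactly the regime in which our own unconditional odds of reaching AI are already poor; this is the sense in which a message from a benevolent alien AI could be net positive.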