Risks of downloading alien AI via SETI search

Alexei Turchin

Abstract: This article examines the risks associated with the program of passive search for alien signals (SETI, the Search for Extra-Terrestrial Intelligence). We propose a scenario of possible vulnerability and discuss the reasons why the proportion of dangerous signals to harmless ones may be dangerously high. This article does not propose to ban SETI programs and does not insist that a SETI-triggered disaster is inevitable; moreover, it suggests how SETI could even prove to be a salvation for mankind.

The idea that passive SETI can be dangerous is not new. Fred Hoyle suggested a scheme of alien attack through SETI signals in the story "A for Andromeda". According to the plot, astronomers receive an alien signal containing a description of a computer and a program for it. This machine produces a description of a genetic code, which leads to the creation of an intelligent creature, a girl dubbed Andromeda, who, working together with the computer, creates advanced technology for the military. Initial suspicion of alien intent is overcome by greed for the technology the aliens can provide. However, the main characters realize that the computer acts in a manner hostile to human civilization; they destroy the computer, and the girl dies.

This scenario was regarded as fiction, first because most scientists do not believe in the possibility of strong AI, and second because we lacked the technology to synthesize a new living organism solely from its genetic code. Or at least we did until recently. Current sequencing and DNA-synthesis technology, as well as progress in developing DNA codes with modified alphabets, indicate that within 10 years the task of reconstituting a living being from a computer code sent from space might be feasible.

Hans Moravec, in the book "Mind Children" (1988), offers a similar type of vulnerability: downloading from space via SETI a computer program containing artificial intelligence that promises new opportunities to its owner and that, after fooling its human host, self-replicates in millions of copies, destroys the host, and finally uses the resources of the captured planet to send its 'child' copies to multiple planets, its future prey. Such a strategy would be like a virus or a digger wasp: horrible, but plausible. R. Carrigan's ideas run in the same direction; he wrote the article "SETI-hacker" and expressed fears that unfiltered signals from space are loaded onto millions of insecure computers of the SETI@home program. But he met tough criticism from programmers, who pointed out that, first, data and programs are kept in separate regions of a computer, and second, the computer codes in which programs are written are so unique that it is impossible to guess their structure sufficiently to hack them blindly (without prior knowledge).

After a while Carrigan issued a second article, "Should potential SETI signals be decontaminated?" (http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf), which I have translated into Russian. In it he pointed to the ease of transferring gigabytes of data over interstellar distances, and also noted that an interstellar signal may contain some kind of bait that will encourage people to assemble a dangerous device according to the transmitted designs. Here Carrigan did not give up his belief in the possibility that an alien virus could directly infect earth's computers without human 'translation' assistance. (We may note with passing alarm that the prevalence of humans obsessed with death, as Fred Saberhagen pointed out in his idea of 'goodlife', means that we cannot entirely discount the possibility of demented 'volunteers': human traitors eager to assist such a fatal invasion.) As a possible confirmation of this idea, Carrigan has shown that it is possible to easily reverse-engineer the language of a computer program: based on the text of the program it is possible to guess what it does, and then to recover the meaning of its operators.

In 2006, E. Yudkowsky wrote the article "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in which he demonstrated that rapidly self-improving universal artificial intelligence is quite possible, that its high intelligence would be extremely dangerous if it were programmed incorrectly, and that the probability of such AI arising, and the risks associated with it, are significantly undervalued. In addition, Yudkowsky introduced the notion of "Seed AI", an embryo AI, that is, a minimal program capable of runaway self-improvement with an unchanged primary goal. The size of a Seed AI can be on the order of hundreds of kilobytes. (For example, a typical representative of a Seed AI is a human baby: the part of its genome responsible for the brain would represent roughly 3% of the whole genome, which has a volume of about 500 megabytes, that is, about 15 megabytes, and given the share of junk DNA the effective amount is even less.)
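A back-of-the-envelope restatement of that size estimate, using only the figures above (a genome of about 500 MB, about 3% of it brain-related, with some junk-DNA fraction f):

\[ 0.03 \times 500\ \text{MB} = 15\ \text{MB}, \qquad \text{informative core} \approx (1 - f) \times 15\ \text{MB}. \]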

To begin, let us assume that there exists in the Universe an extraterrestrial civilization which intends to send a message that will enable it to obtain power over Earth, and consider this scenario. In the next chapter we will consider how realistic it is that another civilization would want to send such a message.

First, we note that in order to prove a vulnerability, it is enough to find just one hole in security; however, in order to prove safety, you must remove every possible hole. The difficulty of these two tasks differs by many orders of magnitude, as is well known to experts on computer security. This distinction has led to the fact that almost all computer systems have been broken (from Enigma to the iPod). I will now try to demonstrate one possible, and in my view even likely, vulnerability of the SETI program. However, I want to caution the reader against the thought that if he finds errors in my reasoning, this automatically proves the safety of the SETI program. Secondly, I would also like to draw the reader's attention to the fact that I am a man with an IQ of 120 who spent all of a month thinking on the vulnerability problem. We need not require an alien super-civilization with an IQ of 1,000,000 and contemplation time of millions of years to significantly improve this algorithm; we have no real idea what an IQ of 300, or even a mere IQ of 100 with much larger mental 'RAM' (the ability to load a major architectural task into mind and keep it there for weeks while processing), could accomplish in finding a much simpler and more effective way. Finally, I propose one possible algorithm, and then we will briefly discuss the other options.

In our discussions we will draw on the Copernican principle, that is, the belief that we are ordinary observers in normal situations. Therefore, the Earth's civilization is an ordinary civilization developing normally. (Readers of tabloid newspapers may object!)

Algorithm of SETI attack

1. The sender creates a kind of signal beacon in space which reveals that its message is clearly artificial. For example, this may be a star with a Dyson sphere which has holes or mirrors, alternately opened and closed. The entire star will then blink with a period of a few minutes; faster is not possible because of the variable distance between different openings. (Even if synchronized with an atomic clock according to a rigid schedule, the speed-of-light limit sets bounds on the speed and reaction time of coordinating large-scale systems.) Nevertheless, this beacon can be seen at a distance of millions of light years. Other types of lighthouses are possible; the important fact is that the beacon signal can be seen at long distances.

2. Nearer to Earth there is a radio beacon with a much weaker but far more information-saturated signal. The lighthouse draws attention to this radio source. This source produces some stream of binary information (i.e., a sequence of 0s and 1s). To the objection that this information would contain noise, I note that the most obvious means of reducing noise (understandable to the receiving side) is simple repetition of the signal in a loop.
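As an illustration, here is a minimal sketch of how a receiver might exploit such looped repetition: align several noisy passes of the same bit stream and take a majority vote per position. The stream layout and noise model are my own assumptions for illustration, not anything specified in the article.

    # Majority-vote decoding of a bit stream that is repeated in a loop.
    # Alignment of the repetitions (e.g., by autocorrelation) is assumed
    # to have been done already and is omitted here.
    def majority_vote(passes: list[list[int]]) -> list[int]:
        """Combine several noisy copies of the same bit stream."""
        length = len(passes[0])
        decoded = []
        for i in range(length):
            ones = sum(p[i] for p in passes)
            decoded.append(1 if 2 * ones > len(passes) else 0)
        return decoded

    # Example: three noisy receptions of the same 8-bit fragment.
    rx = [
        [0, 1, 0, 1, 1, 0, 0, 1],
        [0, 1, 1, 1, 1, 0, 0, 1],  # one flipped bit
        [0, 1, 0, 1, 0, 0, 0, 1],  # another flipped bit
    ]
    print(majority_vote(rx))  # -> [0, 1, 0, 1, 1, 0, 0, 1]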

3. The simplest way to convey meaningful information using a binary signal is to send images. First, because eye structures appeared independently about seven times in Earth's biological diversity, the representation of the three-dimensional world with the help of 2D images is probably universal, and is almost certainly understandable to all creatures who can build a radio receiver.

4. Secondly, 2D images are not too difficult to encode in a binary signal. To do so, let us use the same system that was used in the first television cameras, namely progressive line-by-line scanning. At the end of each line of the image a bright end-of-line marker is placed, repeating after each line, that is, after an equal number of bits. Finally, at the end of each frame another marker is placed, indicating the end of the frame and repeating after each frame. (The frames may or may not form a continuous film.) This may look like this:

01010111101010 11111111111111111

01111010111111 11111111111111111

11100111100000 11111111111111111

Here the end-of-line signal recurs every 25 bits; the end-of-frame signal may recur, for example, every 625 bits.
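A minimal sketch of how a receiver might cut such a stream back into image rows. The particular marker (17 ones) and line length (14 data bits) are taken from the pictured example above; treating them as known in advance is my simplifying assumption.

    # Split a flat bit stream into image rows using the end-of-line marker.
    MARKER = "1" * 17      # the bright end-of-line signal
    LINE_BITS = 14         # data bits per line in the example above

    def decode_rows(stream: str) -> list[str]:
        """Cut the stream at each end-of-line marker and return the rows."""
        rows = []
        chunk = LINE_BITS + len(MARKER)
        for start in range(0, len(stream), chunk):
            line = stream[start:start + chunk]
            if line.endswith(MARKER):
                rows.append(line[:LINE_BITS])  # keep only the pixel bits
        return rows

    stream = ("01010111101010" + MARKER
              + "01111010111111" + MARKER
              + "11100111100000" + MARKER)
    for row in decode_rows(stream):
        print(row)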

5. Clearly, the sender civilization should be extremely interested in our understanding their signals. On the other hand, people will have an extreme desire to decrypt the signal. Therefore, there is no doubt that the picture will be recognized.

6. Using images and movies it is possible to convey a lot of information; they can even teach their language and show their world. It is obvious that one can argue about how understandable such films would be. Here we will focus on the fact that if a certain civilization sends radio signals and another receives them, then they have at least some shared knowledge: namely, they know radio technique, that is, transistors, capacitors, and resistors. These radio parts are sufficiently typical that they can be easily recognized in photographs (for example, parts shown in cutaway view and in sequential assembly stages, or in an electrical schematic whose connections argue for the nature of the components involved).

7. By sending photographs depicting radio parts on the right side and their symbols on the left, it is easy to convey a set of signs for drawing electrical circuits. (In roughly the same way the logical elements of computers could be conveyed.)

8. Then, using these symbols, the sender civilization transmits the blueprints of their simplest computer. The simplest computer from the hardware point of view is the Post machine. It has only 6 commands and a tape for data. Its full electrical scheme would contain only a few tens of transistors or logic elements. It is not difficult to send the blueprints of a Post machine.
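For concreteness, here is a minimal sketch of such a machine emulated in software. I use one common six-command variant of the Post machine (move left, move right, mark, erase, conditional jump, stop); the instruction encoding is my own choice for illustration.

    # A tiny Post machine emulator: an unbounded tape of 0/1 cells, a head,
    # and six commands. A program is a list of (operation, argument) pairs.
    from collections import defaultdict

    def run_post_machine(program, max_steps=10_000):
        tape = defaultdict(int)   # blank cells read as 0
        head, pc = 0, 0
        for _ in range(max_steps):
            op, arg = program[pc]
            if op == "LEFT":
                head -= 1
            elif op == "RIGHT":
                head += 1
            elif op == "MARK":
                tape[head] = 1
            elif op == "ERASE":
                tape[head] = 0
            elif op == "JUMP_IF_MARKED":   # conditional transfer of control
                if tape[head] == 1:
                    pc = arg
                    continue
            elif op == "STOP":
                return tape
            pc += 1
        raise RuntimeError("step budget exhausted")

    # Example program: mark three adjacent cells, then halt.
    prog = [("MARK", None), ("RIGHT", None),
            ("MARK", None), ("RIGHT", None),
            ("MARK", None), ("STOP", None)]
    tape = run_post_machine(prog)
    print(sorted(cell for cell, v in tape.items() if v == 1))  # -> [0, 1, 2]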

9. It is important to note that at the level of algorithms all computers are Turing-equivalent. That means that extraterrestrial computers are, at the basic level, compatible with any earthly computer. Turing equivalence is a mathematical universality, like the Pythagorean theorem. Even Babbage's mechanical machine, designed in the early 19th century, was Turing-complete.

10. Then the sender civilization begins to transmit programs for that machine. Despite the fact that this computer is very simple, it can implement a program of any complexity, although execution will take very long in comparison with more complex computers. It is unlikely that people will be required to build this computer physically: they can easily emulate it on any modern computer, which can perform trillions of operations per second, so even the most complex program will run on it quite quickly. (A possible interim step: the primitive computer transmits the description of a more complex and fast computer, and the programs then run on that one.)

11. So why would people create this computer and run its programs? Perhaps, in addition to the actual computer schemes and programs, the communication must contain some kind of "bait" which would lead people to create such an alien computer, to run its programs, and to provide it with some data about the external world, the Earth outside the computer. There are two general kinds of bait, temptations and dangers:

a). For example, perhaps people receive the following offer; let us call it "the humanitarian aid con". The senders of this "honest" SETI message warn that the sent program is artificial intelligence, but lie about its goals. That is, they argue that it is a "gift" which will help us to solve all medical and energy problems. But it is a Trojan horse of most malevolent intent: too useful not to use. Eventually it becomes indispensable, and then, exactly when society becomes dependent upon it, the foundation of society, and society itself, is overturned...

b). "The temptation of absolute power con": in this scenario the message offers a specific deal to its recipients, promising power over the other recipients. This begins a "race to the bottom" of runaway betrayals and power-seeking counter-moves, ending with a world dictatorship, or worse, a destroyed world dictatorship on an empty world...

c). "The unknown threat con": in this scenario the bait senders report that a certain threat hangs over humanity, for example from another, hostile civilization, and that to protect ourselves we should join the putative "Galactic Alliance" and build a certain installation. Or, for example, they suggest performing a certain class of physical experiments on an accelerator and sending this message on to others in the Galaxy (like a chain letter). And we should send the message on before we ignite the accelerator, please...

d). "The tireless researcher con": here the senders argue that posting messages is the cheapest way to explore the world. They ask us to create an AI that will study our world and send the results back. It does rather more than that, of course...

12. However, the main threat from alien messages with executable code is not the bait itself, but the fact that such a message can become known to a large number of independent groups of people. First, there will always be someone who is more susceptible to the bait. Second, suppose the world learns that an alien message emanates from the Andromeda galaxy, and that the Americans have already received it and may be trying to decipher it. Of course, all other countries will then rush to build radio telescopes and point them at the Andromeda galaxy, since they will be afraid to miss a "strategic advantage". And they will find the message and see that it contains a proposal to grant omnipotence to those willing to collaborate. In doing so, they will not know whether the Americans have taken advantage of it or not, even if the Americans swear that they have not run the malicious code and beg others not to do so either. Moreover, such oaths and appeals will be perceived as a sign that the Americans have already received an incredible extraterrestrial advantage and are trying to deprive "progressive mankind" of it. While most will understand the danger of launching alien code, someone will be willing to risk it. Moreover, there will be a game in the spirit of "winner takes all", just as in the case of the first AI, as Yudkowsky shows in detail. So it is not the bait that is dangerous, but the plurality of recipients. If the alien message is posted to the Internet (and its size, together with the description of the computer, the program, and the bait, can be under a gigabyte, sufficient to run a Seed AI), we have a classic example of "knowledge of mass destruction", as Bill Joy put it, meaning recipes for the genomes of dangerous biological viruses. If the alien code is available to tens of thousands of people, then someone will run it even without any bait, out of simple curiosity. We cannot count on existing SETI protocols, because the discussion on METI (the sending of messages to extraterrestrials) has shown that the SETI community is not monolithic on important questions. Even the simple fact that something was found could leak out and encourage searches by outsiders; the coordinates of the point in the sky would be enough.

13. Since people do not have their own AI, we almost certainly greatly underestimate its power and overestimate our ability to control it. The common idea is that "it is enough to pull the power cord to stop an AI", or to place it in a black box to avoid any associated risks. Yudkowsky shows that an AI can deceive us as an adult deceives a child. If the AI dips into the Internet, it can quickly subdue it as a whole and also learn everything it needs about earthly life. Quickly here means hours or days at most. Then the AI can create advanced nanotechnology, buying components and raw materials (on the Internet it can easily make money and order goods with delivery, as well as recruit people who would receive them, following the instructions of their well-paying but "unseen employer", not knowing whom, or rather what, they are serving). Yudkowsky works through one possible scenario of this stage in detail and estimates that an AI needs only weeks to crack any security and get its own physical infrastructure.

Consider, for clarity, one possible scenario in which an Alien AI (AAI) can seize power on the Earth. Assume that it promises immortality to anyone who creates a computer according to the blueprints sent to him and starts the AI program on that computer. When the program starts, it says: "OK, buddy, I can make you immortal, but for this I need to know on what basis your body works. Provide me, please, access to your databases." And you connect the device to the Internet, where it gradually develops and learns what it needs about the peculiarities of human biology. (Here it could escape into the Internet, but we omit the details, since this is not the main point.) Then the AAI says: "I know how you can become biologically immortal. It is necessary to replace every cell of your body with a nanobiorobot. Fortunately, in the biology of your body there is almost nothing special that would block bio-immortality. Many other organisms in the universe also use DNA as a carrier of information, so I know how to program DNA so as to create genetically modified bacteria that could perform the functions of any cell. I need access to a biological laboratory, where I can perform a few experiments, and it will cost you a million of your dollars." You rent a laboratory, hire several employees, and finally the AAI issues a table with its custom-designed DNA solutions, which are ordered from the laboratory's automated DNA-synthesis machines (see http://en.wikipedia.org/wiki/DNA_sequencing). They implant the DNA into yeast, and after several unsuccessful experiments they create a radio-guided bacterium. (Shorthand: this is not truly a bacterium, since it apparently has all the organelles and a nucleus; and "radio" is shorthand for remote-controlled; a far more likely communication mechanism would be modulated sonic impulses.) This bacterium can synthesize new DNA-based code on commands from outside. Now the AAI has achieved independence from human "filtering" of its true commands, because the bacterium is in effect its own remote-controlled sequencer (self-reproducing, to boot!). Now the AAI can transform and synthesize substances ostensibly introduced into test tubes for a benign test and use them for a malevolent purpose. Obviously, at this moment the Alien AI is ready to launch an attack against humanity. It can transfer itself to the level of nano-computers, so that the source computer can be disconnected. After that the AAI sprays some of its subordinate bacteria, which also carry the AAI, into the air; they gradually spread across the planet, imperceptibly penetrate all living beings, and then, on a timer, start to divide indefinitely, like gray goo, and destroy all living beings. Once those are destroyed, the Alien AI can begin to build its own infrastructure for the transmission of radio messages into space. Obviously, this fictionalized scenario is not unique: for example, the AAI may seize power over nuclear weapons and compel people to build radio transmitters under the threat of attack. Because of the AAI's possibly vast experience and intelligence, it can choose the most appropriate way under any existing circumstances.
(Added by Friedlander: Imagine a CIA- or FSB-like agency with equipment centuries into the future, introduced to a primitive culture without any concept of remote scanning, codes, or the entire fieldcraft of spying. Humanity might never know what hit it, because the AAI might be many centuries if not millennia better armed than we are, in the sense of usable military inventions and techniques.)

14. After that, this SETI-AI does not need people to realize any of its goals. This does not mean that it will necessarily seek to destroy them, but it may want to preempt the possibility that people will fight it; and they will.

15. Then this SETI-AI can do a lot of things, but the most important thing it must do is continue the transmission of its communication-borne embryos to the rest of the Universe. To do so, it will probably turn the matter of the solar system into the same kind of transmitter as the one that sent it. In doing so, the Earth and its people would be a disposable source of materials and parts, possibly on a molecular scale.

So, we have examined one possible scenario of attack, which has 15 stages. Each of these stages is logically plausible and can be criticized and defended separately. Other attack scenarios are possible. For example, we may think that a message is not sent directly to us but is someone else's correspondence, and try to decipher it. And this will be, in fact, the bait.

But not only the distribution of executable code can be dangerous. For example, we could receive some sort of "useful" technology that in fact leads us to disaster (for example, a message in the spirit of "quickly compress 10 kg of plutonium, and you will have a new source of energy", but with planetary rather than local consequences...). Such a mailing could be carried out by a certain "civilization" in advance, to destroy competitors in space. It is obvious that those who receive such messages will primarily seek technologies for military use.

Analysis of possible goals

We now turn to the analysis of the purposes for which certain super-civilizations could carry out such an attack.

1. We must not confuse the concept of a super-civilization with the hope for the super-kindness of a civilization. Advanced does not necessarily mean merciful. Moreover, we should not expect anything good from extraterrestrial "kindness". This is well described in the Strugatsky brothers' novel "The Waves Extinguish the Wind". Whatever goals a super-civilization imposes upon us, we will be their inferiors in capability and in civilizational robustness, even if their intentions are good. A historical example: the activities of Christian missionaries destroying traditional religions. Moreover, we can better understand purely hostile objectives. And if the SETI attack succeeds, it may be only a prelude to doing us more "favors" and "upgrades" until there is scarcely anything human left of us, even if we do survive...

2. We can divide all civilizations into the twin classes of naive and serious. Serious civilizations are aware of the SETI risks and have their own powerful AI, which can resist alien hacker attacks. Naive civilizations, like present-day Earth, already possess the means of long-distance listening in space, and computers, but do not yet possess AI and are not aware of the risks of AI-SETI. Probably every civilization has its "naive" stage, and it is in this phase that it is most vulnerable to SETI attack. Perhaps this phase is very short, since the period from the spread of radio telescopes to the powerful computers that could create AI may be only a few tens of years. Therefore, the SETI attack must be aimed at such civilizations. This is not a pleasant thought, because we are among the vulnerable.

3. If traveling at faster-than-light speeds is not possible, spreading through SETI attacks is the fastest way for a civilization to conquer space. At large distances, it provides significant gains in time compared with any kind of ship. Therefore, if two civilizations compete for mastery of space, the one that favors SETI attack will win.

4. The most important thing is that it is enough to begin a SETI attack just once: it then travels as a self-replicating wave throughout the Universe, striking more and more naive civilizations. For example, if we have a million harmless ordinary biological viruses and one dangerous one, then once they get into a body, we will get trillions of copies of the dangerous virus, and still only a million of the safe ones. In other words, it is enough for one of billions of civilizations to start the process, and then it becomes unstoppable throughout the Universe. Since it spreads at almost the speed of light, countermeasures will be almost impossible.

5. Further, the delivery of SETI messages will be a priority for the virus that has infected a civilization, and the civilization will spend most of its energy on it, as a biological organism spends on reproduction, that is, tens of percent. But Earth's civilization spends on SETI only a few tens of millions of dollars, about one millionth of our resources, and this proportion is unlikely to change much for more advanced civilizations. In other words, an infected civilization will produce a million times more SETI signals than a healthy one. Or, to put it another way, if the Galaxy contains one million healthy civilizations and one infected one, then we have equal chances of encountering a signal from a healthy one or from a contaminated one.

6. Moreover, a message has no other reasonable prospect of distributing its code in space except through self-replication.

7. Moreover, such a process could begin by accident: for example, in the beginning it could be just a research program intended to send the results of its (innocent) studies back to the mother civilization without causing harm to the host civilization, and this process could then turn "cancerous" because of certain propagative faults or mutations.

8. There is nothing unusual in such behavior. In any medium there are viruses: there are viruses in biology, computer viruses in computer networks, and memes in conversation. We do not ask why nature wanted to create a biological virus.

9. Travel through SETI attack is much cheaper than by any other means. Namely, a civilization in Andromeda can simultaneously send a signal to 100 billion stars in our galaxy, whereas each spaceship would cost billions and, even if free, would be slower in reaching all the stars of our Galaxy.

10. Now we list several possible goals of a SETI attack, just to show the variety of motives.

  • To study the universe. After the code executes, research probes are created that gather information and send it back.

  • To ensure that there are no competing civilizations. All of their embryos are destroyed. This is preemptive war on an indiscriminate basis.

  • To preempt the other competing supercivilization (yes, in this scenario there are two!) before it can take advantage of this resource.

  • To prepare a solid base for the arrival of spacecraft. This makes sense if the super-civilization is very far away, so that the gap between the speed of light and the near-light speed of its ships (say, 0.5 c) amounts to a difference of millennia.

  • To achieve immortality. Carrigan showed that the amount of a human's personal memory is on the order of 2.5 gigabytes, so by forwarding a few exabytes of information (1 exabyte = 1,073,741,824 gigabytes) an entire civilization can be sent; a rough version of this arithmetic appears after this list. (You may adjust the units according to how big you like your super-civilizations!)

  • Finally, we may consider illogical and incomprehensible (to us) purposes: for example, a work of art, an act of self-expression, or a toy. Or perhaps an insane rivalry between two factions. Or something we simply cannot understand. (For example, an extraterrestrial would not understand why the Americans stuck a flag into the Moon. Was it worthwhile to fly over 300,000 km to install painted steel?)
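A rough version of the memory arithmetic above, using the article's own figure of 2.5 GB per person; the population of order 10^10 is my illustrative assumption:

\[ 10^{10} \times 2.5\ \text{GB} = 2.5 \times 10^{10}\ \text{GB} \approx 23\ \text{EB}, \qquad 1\ \text{EB} = 2^{30}\ \text{GB}, \]

that is, a few tens of exabytes would indeed suffice for the memories of an Earth-scale civilization.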

11. Assuming signals propagate billions of light years across the Universe, the area susceptible to a widespread SETI attack is a sphere with a radius of several billion light years. In other words, it is enough to find one "bad civilization" in a light cone several billion years high, a cone including billions of galaxies, for us to be in danger of a SETI attack. Of course, this is only true if the average density of civilizations is at least one per galaxy. This is an interesting possibility in relation to Fermi's Paradox.

12. As the depth of our scanning of the sky grows linearly, the volume of space and the number of stars we observe grow as the cube of that depth. This means that our chances of stumbling upon a SETI signal grow nonlinearly, along a fast curve.
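In other words, for transmitters spread uniformly with spatial density $\rho$, the number within scanning depth $R$ is

\[ N(R) = \tfrac{4}{3}\pi \rho R^{3}, \]

so doubling the depth of the search multiplies the number of reachable stars, and hence the chance of catching a signal, by a factor of eight.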

13. It is possible that we will stumble upon several different messages from the skies, each refuting the others, in the spirit of: "Do not listen to them, they are deceiving voices and wish you evil. But we, brother, we are good, and wise..."

14. Whatever positive and valuable message we receive, we can never be sure that all of it is not a subtle and deeply concealed threat. This means that in interstellar communication there will always be an element of distrust, and in every happy revelation, a gnawing suspicion.

15. A defensive posture regarding interstellar communication is only to listen, sending nothing, so as not to reveal one's location. The law prohibits the sending of messages from the United States to the stars. Anyone in the Universe who transmits is self-evidently not afraid to show his position: perhaps because, for the sender, sending is more important than personal safety, for example because it plans to flush out prey prior to attack, or because it is forced to by an evil local AI.

16. It has been said of the atomic bomb that its main secret is that it can be made. Before the discovery of the chain reaction, Rutherford believed that the release of nuclear energy was an issue for the distant future; after the discovery, any physicist knew that it is enough to bring together two subcritical masses of fissionable material in order to release nuclear energy. In other words, if one day we find that signals can be received from space, this will be an irreversible event: something analogous to a deadly new arms race will be on.

Objections.

Discussion of this issue raises several typical objections, which we now address.

Objection 1: The behavior discussed here is too anthropomorphic. In fact, civilizations are very different from each other, so you cannot predict their behavior.

Answer: Here we have a powerful observation selection effect. While a variety of possible civilizations may exist, including such extreme scenarios as thinking oceans, etc., we can only receive radio signals from civilizations that send them, which means that they have the corresponding radio equipment and knowledge of materials, electronics, and computing. That is to say, we are threatened by civilizations of the same type as our own. Those civilizations which can neither receive nor send radio messages do not participate in this game.

The observation selection effect also concerns purposes. The goals of civilizations can be very different, but the civilizations intensely sending signals will be only those that want to tell something to "everyone". Finally, observation selection relates to the effectiveness and universality of a SETI virus: the more effective it is, the more different civilizations will catch it, and the more copies of its radio signals there will be in the sky. So we have "excellent chances" of meeting the most powerful and effective virus.

Objection 2: Super-civilizations have no need to resort to subterfuge; they can conquer us directly.

Answer: This is true only if they are in close proximity to us. If faster-than-light movement is not possible, the impact of messages will be faster and cheaper than that of ships. This difference probably becomes important at intergalactic distances. Therefore, one should not fear a SETI attack from the nearest stars, within a radius of tens or hundreds of light-years.

Objection 3: There are many reasons why a SETI attack might not work. What is the point of running an ineffective attack?

Answer: A SETI attack does not have to work every time. It must succeed in a sufficient number of cases, in line with the objectives of the civilization that sends the message. For example, a con man does not expect to be able to "con" every victim: he would be happy to steal from even one person in a hundred. It follows that a SETI attack is useless if the goal is to capture all civilizations in a certain galaxy; but if the goal is to get at least some outposts in another galaxy, the SETI attack fits. (Of course, these outposts can then build fleets of spaceships to spread SETI-attack bases to outlying stars within the target galaxy.)

The main assumption underlying the idea of a SETI attack is that extraterrestrial super-civilizations exist in the visible universe at all. I think this is unlikely, for reasons related to the anthropic principle. Our universe is one out of 10^500 possible universes with different physical properties, as suggested by one scenario of string theory. My brain is 1 kg out of the 10^30 kg of the solar system. Similarly, I suppose, the Sun is no more than about 1 out of 10^30 stars that could give rise to intelligent life, which means that we are likely alone in the visible universe.

Secondly, the fact that Earth appeared so late (it could have appeared a few billion years earlier) and was not prevented by alien preemption from developing argues for the rarity of intelligent life in the Universe. The putative rarity of our civilization is the best protection against a SETI attack. On the other hand, if we discover parallel worlds or faster-than-light communication, the problem arises again.

Objection 4: Contact is impossible between the post-singularity super-civilizations supposed here to be the senders of SETI signals and a pre-singularity civilization such as ours, because a super-civilization is many orders of magnitude superior to us and its message would be absolutely incomprehensible to us, exactly as contact between ants and humans is not possible. (A singularity here means the moment of creation of an artificial intelligence capable of learning, and of beginning an exponential bootstrap of recursive self-improvement in the design of further intelligence and much else besides, after which a civilization makes a leap in its development; on Earth this may be possible around 2030.)

Answer: In the proposed scenario we are not talking about contact, but about a purposeful deception of us. Similarly, a man is quite capable of manipulating the behavior of ants and other social insects whose objectives are absolutely incomprehensible to them. For example, the LiveJournal user "ivanov-petrov" describes the following scene. As a student, he studied the behavior of bees in the Botanical Garden of Moscow State University, but he had bad relations with the security guard controlling the garden, who regularly expelled him before his time was up. Ivanov-petrov took a green board and conditioned the bees to attack it. The next time the watchman, who constantly wore a green jersey, appeared, all the bees attacked him and he took to flight, so "ivanov-petrov" could continue his research. Such manipulation is not contact, but that does not prevent its effectiveness.



Objection 5: For civilizations located near us it is much easier to attack us, with "guaranteed results", using starships than with a SETI attack.

Answer: It may be that we significantly underestimate the complexity of an attack using starships and, in general, the complexity of interstellar travel. To list only one factor: the potential "minefield" characteristics of the as-yet unknown interstellar medium.

If such an attack were carried out now or in the past, Earth's civilization would have nothing to oppose it, but in the future the situation will change: all matter in the solar system will be full of robots, and possibly completely processed by them. On the other hand, the greater the speed of enemy starships approaching us, the more visible the fleet would be by its braking emissions and other signatures. Such fast starships would be very vulnerable, and in addition we could prepare in advance for their arrival. A slowly moving nano-starship would be far less visible, but if it wished to trigger a transformation of the full substance of the solar system, it would simply have nowhere to land, at least without raising an alert, in such a "nanotech-settled" and fully used future solar system. (Friedlander adds: presumably there would always be some "outer edge" of thinly settled Oort Cloud sort of matter, but by definition the rest of the system would be more densely settled and energy-rich, and any deeper penetration into solar space, and its conquest, would be the proverbial uphill battle; not in terms of gravity gradient, but in terms of the available resources of war against a full Kardashev Type II civilization.)

The most serious objection is that an advanced civilization could, in a few million years, sow our entire galaxy with self-replicating post-singularity nanobots that could achieve any goal in each target star system, including easy prevention of the development of incipient civilizations. (In the USA Frank Tipler advanced this line of reasoning.) However, this has not happened in our case: no one has prevented the development of our civilization. It would be much easier and more reliable to send out robots with such assignments than to bombard the entire galaxy with SETI messages; so, since we do not see such robots, it probably means that no SETI attacks originate inside our galaxy. (It is possible that a probe on the outskirts of the solar system waits for manifestations of human space activity before attacking, a variant of the "Berserker" hypothesis, but it would not attack through SETI.) Over many millions or even billions of years, microrobots could even reach us from distant galaxies tens of millions of light-years away, though radiation damage may limit this without regular self-rebuilding.

In this case a SETI attack would be meaningful only at large distances. However, at such distances, tens and hundreds of millions of light-years, it would probably require innovative methods of signal modulation, such as management of the luminescence of active galactic nuclei, or transmission of a narrow beam in the direction of our galaxy (though the senders would not know where the target will be millions of years hence). Yet a civilization that can manage its galaxy's nucleus might also create a spaceship flying at near-light speed, even one with the mass of a planet. Such considerations severely reduce the likelihood of a SETI attack, but do not lower it to zero, because we do not know all the possible objectives and circumstances.


(A comment by JF: For example, the lack of a SETI attack so far may itself be a cunning ploy. At first receipt of the developing Solar civilization's radio signals, all interstellar "spam" would have ceased, and interference stations of some unknown (but amazing) capability and type would have been set up around the Solar System to block all incoming signals recognizable to its computers as of intelligent origin, in order to get us "lonely" and give us time to discover and appreciate the Fermi Paradox, and even to drive those so philosophically inclined to despair that the Universe is apparently hostile by some standards. Then, when we are desperate, we suddenly discover, slowly at first, partially at first, and then with more and more wonderful signals, that space is filled with bright enticing signals (like spam). The blockade, cunning as it was (analogous to earthly jamming stations), was yet a prelude to a slow "turning up" of preplanned intriguing signal traffic. If, as Earth developed, we had intercepted cunning spam followed by the agonized "don't repeat our mistakes" final messages of tricked and dying civilizations, only a fool would heed the enticing voices of SETI spam. But now, a SETI attack may benefit from the slow unmasking of a cunning masquerade: first a faint and distant light of infinite wonder, only at the end revealed as the headlight of an onrushing cosmic train...)

AT's comment on this: In fact, I think that the senders of a SETI attack are at distances of more than 1000 light-years, and so they do not yet know that we have appeared. But the so-called Fermi Paradox may indeed be a trick: the senders may have deliberately made their signals weak in order to make us think that they are not spam.

The scale of space strategy may be inconceivable to the human mind.



And we should note in conclusion that some types of SETI attack do not even need a computer, just a person who could understand a message that would then "set his mind on fire". At the moment we cannot imagine such a message, but we can give some analogies. Western religions are built around the text of the Bible. It can be assumed that if the text of the Bible appeared in some country which had previously not been familiar with it, a certain number of biblical believers might arise there. The same holds for subversive political literature, or even certain super-ideas, "sticky" memes, or philosophical mind-benders. Or, as suggested by Hans Moravec, we might get a message like: "Now that you have received and decoded me, broadcast me in at least ten thousand directions with ten million watts of power. Or else." The message stops there, leaving us to guess what "or else" may mean. Even a few pages of text may contain a lot of subversive information. Imagine that we could send a message to the scientists of the 19th century: we could reveal to them the general principle of the atomic bomb, the theory of relativity, and the transistor, and thus completely change the course of technological history; and if we added that all the ills of the 20th century came from Germany (which is only partly true), we would have influenced political history as well.

(Comment by JF: Such a latter usage would depend on having received enough of Earth's transmissions to be able to model our behavior and politics. But imagine a message posing as coming from our own future, sent to ignite a "catalytic war". Automated SIGINT (signals intelligence) stations are constructed monitoring our solar system, their computers "cracking" our language and culture (possibly with the aid of children's television programs with see-and-say matching of letters and sounds, of TV news showing world maps and naming countries, possibly even of intercepted wireless internet encyclopedia articles). Then a test or two may follow: posting a what-if scenario inviting comment from bloggers about a future war, say between the two leading powers of the planet. (For purposes of this discussion, say that around 2100 by the present calendar China is strongest and India rising fast.) Any defects and nitpicks in the comments of the blog are noted and corrected. Finally, an actual interstellar message is sent with the debugged scenario (not shifting against the stellar background, it is unquestionably interstellar in origin), purporting to be from a dying starship from the future of the presently stronger side (China), a future in which the presently weaker side's (India's) space fleet has smashed the future version of the Chinese state and essentially committed genocide. The starship has come back in time, but is dying, and indeed the transmission ends, or simply repeats, possibly after some back-and-forth communication between the false computer models of the "starship commander" and the Chinese government. The reader can imagine the urgings of the future Chinese military council to preempt in order to forestall doom. If, as seems probable, such a strategy is too complicated to carry off in one stage, various "future travellers" may emerge from a war, signal for help in vain, and "die" far outside our ability to reach them (say some light-days away, near the alleged location of an "emergence gate" but near an actual transmitter). Quite a drama may emerge as the computer learns to "play" us like a con man, ship after ship of various nationalities dribbling out stories but also getting answers to key questions, aid in constructing an emerging scenario that will be frighteningly believable, enough to ignite a final war. Possibly lists of key people in China (or whatever side is stronger) may be drawn up by the computer with a demand that they be executed as the parents of future war criminals, a sort of International Criminal Court acting as Terminator scenario. Naturally the Chinese state, at that time the most powerful in the world, would guard its rulers' lives against any threat. Yet more refugee spaceships of various nationalities can emerge, transmit, and die, offering their own militaries terrifying new weapons technologies from unknown sciences that really work (more "proof" of their future origin). Or weapons from known sciences, for example decoding online DNA sequences on the future internet and constructing formulae for DNA synthesizers to make specific tailored genetic weapons against particular populations, weapons that endure in the ground, a scorched earth against a particular population on a particular piece of land.

These are copied and spread worldwide, as are totally accurate plans, in standard CNC codes, for easy-to-construct thermonuclear weapons in the 1950s style, using U-238 for the casing and only a few kilograms of fissionable material for ignition. By that time well over a million tons of depleted uranium will exist worldwide, and deuterium is free in the ocean and can be used directly in very large weapons without lithium deuteride. Knowing how to hack together a wasteful, more-than-critical-mass crude fission device is one thing (the South African device was of this kind). But knowing, with absolute accuracy, down to machining drawings and CNC codes, how to make high-yield, super-efficient, very dirty thermonuclear weapons without need for testing means that any small group with a few dozen million dollars and automated machine tools can clandestinely make a multi-megaton device, or many, and smash the largest cities; and any small power with a few dozen jets can cripple a continent for a decade. Already over a thousand tons of plutonium exist. The SETI spam can include CNC codes for making a one-shot reactor-plutonium chemical refiner that would be left hopelessly radioactive but would output chemically pure plutonium. (This would be prone to predetonation because of the Pu-240 content, but then plans for debugged laser isotope separators may also be downloaded.) This is a variant of the "catalytic war" and "nuclear six-gun" (i.e., easy-to-obtain weapons) scenarios of the late Herman Kahn. Even cheaper would be bioattacks of the kind outlined above. The principal point is that fully debugged planet-killer weapons take great amounts of debugging, tens to hundreds of billions of dollars, and free access to a world scientific community. Today, it is to every great power's advantage to keep accurate designs out of the hands of third parties, because they have to live on the same planet (and because the fewer weapons, the easier it is to stay a great power). Not so the SETI spam authors. Without the hundreds of billions in R&D, the actual construction budget would be on the order of a million dollars per multi-megaton device (depending on the expense of obtaining the raw reactor plutonium). To extend today's scenarios into the future, the SETI spam authors manipulate Georgia (with about a $10 billion GDP) to arm against Russia, Taiwan against China, and Venezuela against the USA. Although Russia and China and the USA could respectively promise annihilation against any attacker, with a military budget around 4% of GDP and the downloaded plans, the reverse, for the first time, could then also be true. (400 hundred-megaton bombs could kill by fallout perhaps 95% of an unprotected population over a country the size of the USA or China, and 90% of one the size of Russia, assuming the worst kind of cooperation from the winds; from an old chart by Ralph Lapp.) Anyone living near a super-armed microstate with border conflicts will, of course, wish to arm themselves. And these newly armed states themselves, of course, will have borders. Note that this drawn-out scenario gives lots of time for a huge arms buildup on both (or many!) sides, and a Second Cold War that eventually turns very hot indeed... and unlike a human player of such a horrific "catalytic war" con game, the SETI spam authors need not be concerned at all about worldwide fallout or enduring biocontamination.)

Conclusion.

The probability of such an attack is described by the product of the probabilities of the following events. For these probabilities we can only give so-called "expert" assessments, that is, assign them certain a priori subjective probabilities, as we do now.

1) The likelihood that extraterrestrial civilizations exist at a distance at which radio communication with them is possible. In general, I agree with the view of Shklovsky and the supporters of the "Rare Earth" hypothesis that Earth's civilization is unique in the observable universe. This does not mean that extraterrestrial civilizations do not exist at all (because the universe, according to the theory of cosmological inflation, is almost endless); they are just beyond the event horizon visible from our point in space-time. In addition, this is not just about distance, but about the distance at which a connection can be established that allows transferring gigabytes of information. (However, passing even 1 bit per second, one can transmit a gigabit in about 30 years, which may be sufficient for a SETI attack.) If some superluminal communication or interaction with parallel universes becomes possible in the future, it would dramatically increase the chances of a SETI attack. I estimate this chance at 10%.

2) The probability that a SETI attack is technically feasible: that is, that a computer program with recursively self-improving AI, of a size suitable for transmission, is possible. I see this chance as high: 90%.

3) The likelihood that civilizations which could carry out such an attack exist in our past light cone. This probability depends on the density of civilizations in the universe and on the percentage of civilizations that choose to initiate such an attack or, more importantly, fall victim to it and become repeaters. One must also take into account not only the density of civilizations but also the density of the radio signals they create. All these factors are highly uncertain. It therefore seems reasonable to assign this probability a value of 50%.

4) The probability that we find such a signal during our rising civilization's period of vulnerability to it. The period of vulnerability lasts from now until the moment when we decide, and are technically ready to implement the decision, not to download any extraterrestrial computer programs under any circumstances. Such a decision could probably only be enforced by our own AI installed as world ruler (which is itself fraught with considerable risk). Such a world AI (WAI) might be created circa 2030. We cannot exclude, however, that our WAI will still not impose a ban on the intake of extraterrestrial messages, and will fall victim to attack by an alien artificial intelligence which surpasses it by millions of years of machine evolution. Thus, the window of vulnerability is most likely about 20 years, and the "width" of the window depends on the intensity of searches in the coming years. This "width" depends, for example, on the course of the current economic crisis of 2008-2010, on the risks of a World War III, and on how all this will affect the emergence of the WAI. It also depends on the density of infected civilizations and their signal strength: as these factors increase, the chances of detecting them earlier grow. Because we are a normal civilization under normal conditions, by the Copernican principle this probability should be large enough; otherwise SETI attacks would be generally ineffective. (The SETI attack itself, here supposed to exist, is also subject to a form of "natural selection" that tests its effectiveness, in the sense that it either works or does not.) This chance is very uncertain; we will put it, too, at over 50%.

5) Next is the probability that the SETI attack will be successful: that we swallow the bait, download the program and the description of the computer, run them, lose control over them, and let them reach all their goals. I estimate this chance to be very high because of the factor of multiplicity, that is, the fact that the message will be downloaded repeatedly and someone, sooner or later, will start it. In addition, through natural selection, we will most likely get the most effective and deadly message, the one that most effectively deceives our type of civilization. I consider it to be 90%.

6) Finally, it is necessary to assess the probability that a SETI attack will lead to complete human extinction. On the one hand, one can imagine a "good" SETI attack, limited to creating a powerful radio emitter beyond the orbit of Pluto. However, for such a program there will always be the risk that an emergent society at its target star will create a powerful artificial intelligence and an effective weapon that would destroy the emitter. In addition, to create the most powerful transmitter, all the substance of the solar system and all its solar energy would be needed. Consequently, the share of such "good" attacks will be lower due to natural selection, and some of them will sooner or later be destroyed by the civilizations they capture, so their signals will be weaker. So I estimate the chance that a SETI attack which has reached all its goals destroys all people at 80%.

As a result, we have: 0.1 × 0.9 × 0.5 × 0.5 × 0.9 × 0.8 = 0.0162, i.e. 1.62%.

So, after rounding, the chance of the extinction of mankind through a SETI attack in the 21st century is around 1 percent, with a theoretical precision of an order of magnitude.
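A trivial sketch of this product, with labels standing in for the six subjective estimates in points 1-6 above:

    # Product of the six subjective probability estimates given above.
    factors = {
        "ETI within communication range": 0.1,
        "attack technically feasible": 0.9,
        "attacker in our light cone": 0.5,
        "signal found during vulnerability window": 0.5,
        "attack succeeds once received": 0.9,
        "success leads to extinction": 0.8,
    }
    p = 1.0
    for value in factors.values():
        p *= value
    print(f"{p:.4f}")  # -> 0.0162, about 1.6%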

Our best protection in this context would be for civilizations to occur very rarely in the Universe. But this is not entirely reassuring, because the Fermi paradox here works on the principle of "neither alternative is good":

  • If there are extraterrestrial civilizations, and there are many of them, it is dangerous, because they can threaten us in one way or another.

  • If extraterrestrial civilizations do not exist, it is also bad, because it gives weight to the hypothesis of the inevitable extinction of technological civilizations, or to our underestimation of the frequency of cosmological catastrophes, or to a high density of space hazards, such as gamma-ray bursts and asteroids, which we underestimate because of the observation selection effect (i.e., had we already been killed, we would not be here making these observations...).

A reverse option is theoretically possible: through SETI we might receive a warning about a certain threat that has destroyed most civilizations, such as: "Do not do any experiments with X particles; they could lead to an explosion that would destroy the planet." But even in that case a doubt would remain that this is a deception intended to deprive us of certain technologies. (Proof would be similar reports coming from other civilizations in the opposite direction of space.) And such a communication might only enhance the temptation to experiment with X particles.

So I do not appeal for the abandonment of SETI searches, although such appeals would be useless in any case.

It may be useful, however, to postpone any technical realization of messages that we might receive via SETI until the time when we have our own Artificial Intelligence. Until that moment there may be only 10-30 years left, so we could wait. Secondly, it would be important to hide the fact of receiving a dangerous SETI signal, its content, and the location of its source.

This risk involves a methodologically interesting aspect. Despite the fact that I have thought about global risks every day over the last year and have read on the topic, I found this dangerous vulnerability in SETI only now. In hindsight, I was able to find another four authors who came to similar conclusions. However, I have made a significant finding: there may be global risks not yet discovered, and even if the constituent parts of a risk are separately known to me, it may take a long time to join them into a coherent picture. Thus, hundreds of dangerous vulnerabilities may surround us like an unknown minefield. Only when the first explosion happens will we know; and that first explosion may be the last.

An interesting question is whether Earth itself could become a source of SETI attack in the future, when we have our own AI. Obviously, it could. Already in the METI program there exists the idea of sending the code of human DNA (the "children's message" scenario, in which children ask to have a piece of their DNA taken and cloned on another planet, as depicted in the film "Calling All Aliens").

Literature:

1. Hoyle, F. A for Andromeda. http://en.wikipedia.org/wiki/A_for_Andromeda

2. Yudkowsky, E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Forthcoming in Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic. http://www.singinst.org/upload/artificial-intelligence-risk.pdf

3. Moravec, H. Mind Children: The Future of Robot and Human Intelligence. 1988.

4. Carrigan, R. A., Jr. The Ultimate Hacker: SETI signals may need to be decontaminated. http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf

5. Carrigan's page: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm