Risks of downloading alien AI via SETI search

Alexei Turchin

Abstract: This article examines risks associated with the program of passive search for alien signals (SETI, the Search for Extra-Terrestrial Intelligence). We propose a scenario of possible vulnerability and discuss the reasons why the proportion of dangerous signals to harmless ones may be dangerously high. This article does not propose to ban SETI programs and does not insist on the inevitability of a SETI-triggered disaster. Moreover, it suggests how SETI could even be a salvation for mankind.

The idea that passive SETI can be dangerous is not new. Fred Hoyle, in the story "A for Andromeda", suggested a scheme of alien attack through SETI signals. According to the plot, astronomers receive an alien signal which contains the description of a computer and a program for it. This machine produces the description of a genetic code, which leads to the creation of an intelligent creature, a girl dubbed Andromeda, who, working together with the computer, creates advanced technology for the military. The initial suspicion of alien intent is overcome by greed for the technology the aliens can provide. However, the main characters realize that the computer acts in a manner hostile to human civilization; they destroy the computer, and the girl dies.

This scenario was fiction, first because most scientists did not believe in the possibility of a strong AI, and second because we had no technology to synthesize a new living organism from its genetic code alone. Or at least we did not have it until recently. Current technologies of DNA sequencing and synthesis, as well as progress in developing DNA codes with modified alphabets, indicate that within 10 years the task of re-creating a living being from a computer code sent from space might be feasible.

Hans Moravec, in the book "Mind Children" (1988), describes a similar type of vulnerability: downloading from space, via SETI, a computer program possessing artificial intelligence which promises new opportunities to its owner and which, after fooling its human hosts, self-replicates in millions of copies, destroys the hosts, and finally uses the resources of the captured planet to send its 'child' copies to the multiple planets which constitute its future prey. Such a strategy would be like that of a virus or a digger wasp: horrible, but plausible. R. Carrigan's ideas point in the same direction: he wrote the article "SETI-hacker", expressing fears that unfiltered signals from space are loaded onto millions of insecure computers of the SETI@home program. He met tough criticism from programmers, who pointed out that, first, data fields and programs are kept in separate regions in computers, and second, the computer codes in which programs are written are so unique that it is impossible to guess their structure well enough to hack them blindly (without prior knowledge).

After a while Carrigan issued a second article, "Should potential SETI signals be decontaminated?" (http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf), which I have translated into Russian. In it he pointed to the ease of transferring gigabytes of data over interstellar distances, and also indicated that an interstellar signal may contain some kind of bait that will encourage people to assemble a dangerous device according to the transmitted designs. Here Carrigan did not give up his belief in the possibility that an alien virus could directly infect Earth's computers without human 'translation' assistance. (We may note with passing alarm that the prevalence of humans obsessed with death, as Fred Saberhagen pointed out in his idea of 'goodlife', means that we cannot entirely discount the possibility of demented 'volunteers', human traitors eager to assist such a fatal invasion.) As a possible confirmation of this idea, Carrigan showed that it is relatively easy to reverse-engineer the language of a computer program: based on the text of a program, it is possible to guess what it does and then to recover the meaning of its operators.

In 2006, E. Yudkowsky wrote the article "Artificial Intelligence as a Positive and Negative Factor in Global Risk", in which he demonstrated that a rapidly self-improving universal artificial intelligence is quite possible, that its high intelligence would be extremely dangerous if it were programmed incorrectly, and, finally, that the likelihood of such an AI appearing, and the risks associated with it, are significantly undervalued. In addition, Yudkowsky introduced the notion of "Seed AI", an embryo AI, that is, a minimal program capable of runaway self-improvement with an unchanged primary goal. The size of a Seed AI can be on the order of hundreds of kilobytes. (For example, a typical representative of a Seed AI is a human baby: the part of the genome responsible for the brain is about 3% of the total genome, whose volume is about 500 megabytes, which gives roughly 15 megabytes, and given the share of junk DNA the effective size is even less.)

To begin, let us assume that somewhere in the Universe there is an extraterrestrial civilization which intends to send a message that will enable it to obtain power over Earth, and consider this scenario. In the next chapter we will consider how realistic it is that another civilization would want to send such a message.

First, we note that in order to prove vulnerability it is enough to find just one hole in security, whereas in order to prove safety you must remove every possible hole. The complexity of these two tasks differs by many orders of magnitude, as is well known to experts in computer security. This distinction has led to the fact that almost all computer systems have been broken (from Enigma to the iPod). I will now try to demonstrate one possible, and even, in my view, likely, vulnerability of the SETI program. However, I want to caution the reader against the thought that finding errors in my discussion automatically proves the safety of the SETI program. Second, I would also like to draw the reader's attention to the fact that I am a man with an IQ of 120 who spent all of a month thinking about the vulnerability problem. We need not require an alien supercivilization with an IQ of 1,000,000 and millions of years of contemplation to significantly improve this algorithm; even an IQ of 300, or a mere IQ of 100 with much larger mental 'RAM' (the ability to load a major architectural task into mind and keep it there for weeks while processing), could find a much simpler and more effective way. Finally, I propose one possible algorithm, and then we will briefly discuss the other options.

In our discussions we will draw on the Copernican principle, that is, the belief that we are ordinary observers in normal situations. Therefore, the Earth’s civilization is an ordinary civilization developing normally. (Readers of tabloid newspapers may object!)

Algorithm of SETI attack

1. The sender creates a kind of signal beacon in space which reveals that its message is clearly artificial. For example, this may be a star with a Dyson sphere which has holes or mirrors that are alternately opened and closed. The entire star will then blink with a period of a few minutes; faster is not possible because of the varying distances between different openings. (Even synchronized with an atomic clock to a rigid schedule, the speed-of-light limit constrains the reaction time of such a large-scale coordinated system.) Nevertheless, this beacon could be seen at a distance of millions of light-years. Other types of beacons are possible; the important fact is that the beacon signal can be seen at long distances.

2. Nearer to Earth is a radio beacon with a much weaker but more information-saturated signal. The lighthouse draws attention to this radio source, which produces a stream of binary information (i.e., a sequence of 0s and 1s). To the objection that this information would contain noise, I note that the most obvious means of reducing noise (understandable to the recipient) is simple repetition of the signal in a loop.

3. The simplest way to convey meaningful information with a binary signal is to send images. First, eye structures appeared independently seven times in Earth's biological history, which suggests that representing the three-dimensional world with 2D images is probably universal and almost certainly understandable to all creatures who can build a radio receiver.

4. Second, 2D images are not too difficult to encode in a binary signal. To do so, let us use the same system that was used in the first television cameras: progressive line scanning with synchronization signals. At the end of each line a bright marker is placed, repeated after each line, that is, after an equal number of bits. Finally, at the end of each frame another signal is placed, indicating the end of the frame and repeated after each frame. (The frames may or may not form a continuous film.) This may look like this:

01010111101010 11111111111111111

01111010111111 11111111111111111

11100111100000 11111111111111111

Here the end-of-line signal is the run of ones, repeated, say, every 25 bits. The end-of-frame signal might appear, for example, every 625 lines.
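The receiving side's decoding step can be sketched in a few lines. The layout below (14 data bits per line followed by a 17-bit run of ones, matching the example rows above) is purely hypothetical, chosen only to illustrate the idea:

```python
def decode_image(stream, line_len=14, sync_len=17):
    """Slice a raw bit stream into image rows, assuming each row of
    `line_len` data bits is followed by a `sync_len` run of ones
    (hypothetical values chosen to match the example above)."""
    period = line_len + sync_len
    rows = []
    for i in range(0, len(stream), period):
        chunk = stream[i:i + period]
        data, sync = chunk[:line_len], chunk[line_len:]
        if set(sync) != {"1"}:  # the sync marker must be all ones
            raise ValueError("lost synchronization at bit %d" % i)
        rows.append([int(b) for b in data])
    return rows

# The three example lines from the text, concatenated into one stream:
stream = ("01010111101010" + "1" * 17 +
          "01111010111111" + "1" * 17 +
          "11100111100000" + "1" * 17)
rows = decode_image(stream)
print(rows[0])  # -> [0, 1, 0, 1, 0, 1, 1, 1, 1, 0, 1, 0, 1, 0]
```

In practice the receiver would first have to infer the period itself, e.g. by looking for the regularly repeated runs of ones, but once the period is known the decoding is trivial.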

5. Clearly, the sender civilization should be extremely interested in our understanding its signals; on the other hand, people will have an extreme desire to decrypt them. Therefore, there is no doubt that the pictures will be recognized.

6. Using images and movies, much information can be conveyed; they can even teach the senders' language and show their world. One can of course argue about how understandable such films would be. Here we will focus on the fact that if one civilization sends radio signals and another receives them, they have some shared knowledge: namely, they both know radio technology, i.e. transistors, capacitors, and resistors. These radio parts are distinctive enough to be easily recognized in photographs. (For example, parts might be shown in cutaway view and in sequential assembly stages, or in an electrical schematic whose connections argue for the nature of the components involved.)

7. By sending photographs depicting radio parts on the right side and their symbols on the left, it is easy to convey the set of signs used in electrical circuits. (In roughly the same way, the logical elements of computers could be conveyed.)

8. Then, using these symbols, the sender civilization transmits the blueprint of its simplest computer. The simplest computer from a hardware point of view is the Post machine. It has only 6 commands and a data tape. Its full electric scheme would contain only a few tens of transistors or logic elements, so it is not difficult to send the blueprints of a Post machine.

9. It is important to note that all computers are equivalent at the level of algorithms, i.e. Turing-complete. That means that extraterrestrial computers are, at the basic level, compatible with any Earth computer. Turing completeness is a mathematical universal, like the Pythagorean theorem; even Babbage's mechanical machine, designed in the early 19th century, was Turing-complete.

10. Then the sender civilization begins to transmit programs for that machine. Despite the fact that this computer is very simple, it can run a program of any complexity, although the run will take much longer than on a more sophisticated computer. It is unlikely that people will be required to build this computer physically: they can easily emulate it within any modern computer, so that it performs trillions of operations per second, and even the most complex program runs quickly. (A possible interim step: the primitive computer transmits the description of a more complex and fast computer, and the programs are then run on that one.)
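The ease of emulation claimed above is easy to demonstrate: a Post machine (a head over an infinite tape of marks, with a handful of instructions) fits in a few lines of code on any modern computer. The instruction names and encoding below are illustrative, not the canonical formulation:

```python
from collections import defaultdict

def run_post_machine(program, max_steps=100_000):
    """Emulate a simple Post machine. `program` is a list of tuples such
    as ("mark",), ("right",) or ("jump", target). Cells of the infinite
    tape hold 0 (blank) or 1 (mark)."""
    tape = defaultdict(int)   # unmarked cells default to 0
    head = pc = 0
    for _ in range(max_steps):
        op = program[pc]
        if op[0] == "left":
            head -= 1; pc += 1
        elif op[0] == "right":
            head += 1; pc += 1
        elif op[0] == "mark":
            tape[head] = 1; pc += 1
        elif op[0] == "erase":
            tape[head] = 0; pc += 1
        elif op[0] == "jump":   # jump to op[1] if the current cell is marked
            pc = op[1] if tape[head] else pc + 1
        elif op[0] == "stop":
            return tape, head
    raise RuntimeError("step limit exceeded")

# A tiny program that writes three marks and halts:
program = [("mark",), ("right",), ("mark",), ("right",), ("mark",), ("stop",)]
tape, head = run_post_machine(program)
print(sum(tape.values()), head)  # -> 3 2
```

Anything transmitted as a program for such a machine could therefore be run, at great speed, inside any ordinary computer on Earth.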

11. So why would people create this computer and run its programs? Beyond the actual computer schemes and programs, the message would probably contain some kind of "bait" that would lead people to create such an alien computer, run its programs, and provide it with data about the external world, the Earth outside the computer. There are two general kinds of bait: temptations and dangers.

a) For example, people might receive the following offer; let us call it "the humanitarian aid con". The senders of an "honest" SETI message warn that the sent program is an artificial intelligence, but lie about its goals. That is, they claim it is a "gift" which will help us solve all medical and energy problems. It is a Trojan horse of the most malevolent intent, but too useful not to use. Eventually it becomes indispensable, and exactly when society becomes dependent upon it, the foundation of society, and society itself, is overturned…

b) "The temptation of absolute power con": in this scenario the message offers recipients a specific bargain, promising power over other recipients. This begins a race to the bottom of runaway betrayals and power-seeking counter-moves, ending with a world dictatorship, or worse, a destroyed world dictatorship on an empty world…

c) "The unknown threat con": in this scenario the senders report that a certain threat hangs over humanity, for example from another, hostile civilization, and that to protect ourselves we should join the putative "Galactic Alliance" and build a certain installation. Or, for example, they suggest performing a certain class of physical experiments on an accelerator and forwarding the message to others in the Galaxy, like a chain letter. And we should send the message on before we ignite the accelerator, please…

d) "The tireless researcher con": here the senders argue that sending messages is the cheapest way to explore the world. They ask us to create an AI that will study our world and send the results back. It does rather more than that, of course…

12. However, the main threat from alien messages with executable code is not the bait itself, but the fact that such a message can become known to a large number of independent groups of people. First, there will always be someone who is more susceptible to the bait. Second, suppose the world learns that an alien message emanates from the Andromeda galaxy and that the Americans have already received it and may be trying to decipher it. Of course, all other countries will then rush to build radio telescopes and point them at the Andromeda galaxy, afraid to miss a "strategic advantage". They will find the message and see that it contains a proposal to grant omnipotence to those willing to collaborate. In doing so, they will not know whether the Americans have taken advantage of it, even if the Americans swear that they have not run the malicious code and beg others not to do so either. Moreover, such oaths and appeals will be perceived as a sign that the Americans have already received an incredible extraterrestrial advantage and are trying to deprive "progressive mankind" of it. While most will understand the danger of launching the alien code, someone will be willing to risk it. Moreover, there will be a game in the spirit of "winner takes all", as in the case of the development of AI, as Yudkowsky shows in detail. So it is not the bait that is dangerous but the plurality of recipients. If the alien message is posted to the Internet (and its size, sufficient to run a Seed AI together with the description of the computer and the bait, can be less than a gigabyte), we have a classic example of "knowledge of mass destruction", in Bill Joy's phrase, referring to the genome recipes of dangerous biological viruses.
If the alien code is available to tens of thousands of people, then someone will run it, even without any bait, out of simple curiosity. We cannot count on existing SETI protocols, because the discussion of METI (sending messages to extraterrestrials) has shown that the SETI community is not monolithic on important questions. Even the simple fact that something was found could leak and encourage searching by outsiders, and the coordinates of the point in the sky would be enough.

13. Since people do not have AI, we almost certainly greatly underestimate its power and overestimate our ability to control it. The common idea is that "it is enough to pull the power cord to stop an AI", or to place it in a black box to avoid any associated risks. Yudkowsky shows that an AI can deceive us as an adult deceives a child. If the AI gets onto the Internet, it can quickly subdue it as a whole and also learn everything it needs about earthly life. Quickly means hours or days at most. Then the AI can create advanced nanotechnology, buying components and raw materials (on the Internet it can easily make money and order goods with delivery, as well as recruit people who would receive them, following the instructions of their well-paying but "unseen employer", not knowing whom, or rather what, they are serving). Yudkowsky outlines one possible scenario of this stage in detail and estimates that an AI needs only weeks to crack any security and obtain its own physical infrastructure.

Consider, for clarity, one possible scenario in which an Alien AI (AAI) could seize power on Earth. Assume that it promises immortality to anyone who builds a computer from the blueprints sent to him and runs the AI program on it. When the program starts, it says: "OK, buddy, I can make you immortal, but for this I need to know how your body works. Please give me access to your databases." So you connect the device to the Internet, where it gradually develops and learns what it needs about the peculiarities of human biology. (At this point it could escape to the Internet, but we omit the details, since this is not the main point.) Then the AAI says: "I know how you can become biologically immortal. It is necessary to replace every cell of your body with a nano-biorobot. Fortunately, there is almost nothing special in the biology of your body that would block bio-immortality. Many other organisms in the universe also use DNA as a carrier of information, so I know how to program DNA to create genetically modified bacteria that could perform the functions of any cell. I need access to a biological laboratory, where I can perform a few experiments, and it will cost you a million of your dollars." You rent a laboratory and hire several employees, and finally the AAI issues tables of custom-designed DNA, which are ordered from the laboratory's automated DNA synthesis machines (http://en.wikipedia.org/wiki/DNA_sequencing). They implant the DNA into yeast, and after several unsuccessful experiments they create a radio-guided bacterium. (Shorthand: this is not truly a bacterium, since it apparently has organelles and a nucleus; "radio" is also shorthand for remote-controlled, and a far more likely communication mechanism would be modulated sonic impulses.) This bacterium can synthesize new DNA-based code on commands from outside.
Now the AAI has achieved independence from human "filtering" of its true commands, because the bacterium is in effect its own remote-controlled sequencer (self-reproducing to boot!). The AAI can now transform and synthesize substances ostensibly introduced into test tubes for a benign test and use them for a malevolent purpose. Obviously, at this moment the Alien AI is ready to launch an attack against humanity. It can transfer itself to the level of nano-computers, so that the source computer can be disconnected. After that, the AAI sprays some of its subordinate bacteria into the air; these also carry the AAI, gradually spread across the planet, imperceptibly penetrate all living beings, and then, on a timer, start to divide indefinitely, like grey goo, and destroy all living beings. Once those are destroyed, the Alien AI can begin to build its own infrastructure for transmitting radio messages into space. Obviously, this fictionalized scenario is not unique: for example, the AAI might seize power over nuclear weapons and compel people to build radio transmitters under threat of attack. Because of its possibly vast experience and intelligence, the AAI can choose the most appropriate way in any existing circumstances. (Added by Freidlander: imagine a CIA- or FSB-like agency with equipment centuries into the future, introduced into a primitive culture with no concept of remote scanning, codes, or the entire fieldcraft of spying. Humanity might never know what hit it, because the AAI might be many centuries, if not millennia, better armed than we are, in the sense of usable military inventions and techniques.)

14. After that, this SETI-AI does not need people to realize any of its goals. This does not mean that it will necessarily seek to destroy them, but it may want to pre-empt resistance, since people will fight it, and they will.

15. Then this SETI-AI can do many things, but the most important thing it must do is continue the transmission of its communication-borne embryos to the rest of the Universe. To do so, it will probably turn the matter of the solar system into a transmitter like the one that sent it. In the process, the Earth and its people would be a disposable source of materials and parts, possibly on a molecular scale.

So, we have examined one possible scenario of attack, in 15 stages. Each stage is logically plausible and can be criticized or defended separately. Other attack scenarios are possible. For example, we may receive a message which is not addressed to us but is someone else's correspondence, and try to decipher it; this too may, in fact, be bait.

But it is not only the distribution of executable code that can be dangerous. For example, we might receive some sort of "useful" technology that in reality leads us to disaster (something in the spirit of "quickly compress 10 kg of plutonium and you will have a new source of energy", but with planetary rather than local consequences). Such a mailing could be made by a certain "civilization" in advance, to destroy competitors in space. It is obvious that those who receive such messages will primarily seek technologies for military use.

Analysis of possible goals

We now turn to an analysis of the purposes for which a supercivilization might carry out such an attack.

1. We must not confuse the concept of a supercivilization with the hope of a superkind civilization. Advanced does not necessarily mean merciful; moreover, we should not expect anything good from extraterrestrial "kindness". This is well described in the Strugatskys' novel "The Waves Extinguish the Wind". Whatever goals a supercivilization imposes upon us, we would be its inferiors in capability and civilizational robustness, even if its intentions are good. A historical example: the activities of Christian missionaries destroying traditional religions. Purely hostile objectives we can understand even better. And if a SETI attack succeeds, it may be only a prelude to doing us more "favors" and "upgrades", until there is scarcely anything human left of us, even if we do survive…

2. We can divide all civilizations into two classes: naive and serious. Serious civilizations are aware of SETI risks and have their own powerful AI which can resist alien hacker attacks. Naive civilizations, like present-day Earth, already possess the means of long-distance listening in space and computers, but do not yet possess AI and are not aware of the risks of AI-SETI. Probably every civilization has its "naive" phase, and it is in this phase that it is most vulnerable to a SETI attack. Perhaps this phase is very short: the period between the spread of radio telescopes and the appearance of computers powerful enough to create AI may be only a few tens of years. Therefore, a SETI attack must be aimed at such civilizations. This is not a pleasant thought, because we are among the vulnerable.

3. If traveling at super-light speeds is not possible, spreading a civilization through SETI attacks is the fastest way to conquer space. At large distances it provides significant gains in time compared with ships of any kind. Therefore, if two civilizations compete for mastery of space, the one that favors SETI attack will win.

4. The most important thing is that it is enough to begin a SETI attack just once, because it then propagates as a self-replicating wave throughout the Universe, striking more and more naive civilizations. For example, if we have a million harmless biological viruses and one dangerous one, then once they get into a body we will get trillions of copies of the dangerous virus and still only a million of the safe ones. In other words, it is enough that one in billions of civilizations starts the process, and it becomes unstoppable throughout the Universe. Since it spreads at almost the speed of light, countermeasures will be almost impossible.

5. Further, the delivery of SETI messages will be a priority for the virus that has infected a civilization, and the civilization will spend most of its energy on it, as a biological organism spends on reproduction, that is, tens of percent. Earth's civilization, by contrast, spends on SETI only a few tens of millions of dollars, about one millionth of our resources, and this proportion is unlikely to change much for more advanced civilizations. In other words, an infected civilization will produce roughly a million times more SETI signals than a healthy one. To say it another way, if in the Galaxy there are a million healthy civilizations and one infected one, then we have equal chances of encountering a signal from a healthy or from a contaminated one.
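The proportion argument above can be checked with back-of-the-envelope arithmetic; the figures below are the rough estimates from the text, not measurements:

```python
# Rough estimates from the text, not measured values.
healthy_seti_share = 1e-6    # ~one millionth of resources, as for present Earth
infected_seti_share = 0.3    # "tens of percent", like biological reproduction
healthy_civilizations = 1_000_000

# How much louder a single infected civilization is than a healthy one:
amplification = infected_seti_share / healthy_seti_share
print(round(amplification))  # -> 300000

# Signal output of one infected civilization vs. a million healthy ones:
print(amplification / healthy_civilizations)  # ~0.3, i.e. roughly equal chances
```

With "tens of percent" anywhere between 10% and 100%, the amplification factor lands in the 10^5 to 10^6 range, which is where the "equal chances" conclusion comes from.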

6. Moreover, there is no other reasonable way for it to distribute its code in space except through self-replication.

7. Moreover, such a process could begin by accident: for example, in the beginning it might be just a research project, intended to send the results of its (innocent) studies back to the mother civilization without harming the host civilization, which then turned "cancerous" because of certain replication faults or mutations.

8. There is nothing unusual in such behavior. Every medium has its viruses: viruses in biology, computer viruses in networks, memes in conversation. We do not ask why nature "wanted" to create a biological virus.

9. Travel by SETI attack is much cheaper than by any other means. A civilization in Andromeda can simultaneously send a signal to 100 billion stars in our Galaxy, whereas each spaceship would cost billions and, even if free, would take longer to reach all the stars of our Galaxy.

10. Now we list several possible goals of a SETI attack, just to show the variety of motives.

  • To study the universe. After the code executes, research probes are created to gather and send back information.

  • To ensure that there are no competing civilizations. All of their embryos are destroyed. This is preemptive war on an indiscriminate basis.

  • To preempt the other competing supercivilization (yes, in this scenario there are two!) before it can take advantage of this resource.

  • This is done in order to prepare a solid base for the arrival of spacecraft. This makes sense if the supercivilization is very far away, so that the gap between the speed of light and the near-light speed of its ships (say, 0.5 c) amounts to a difference of millennia.

  • The goal is to achieve immortality. Carrigan showed that the amount of human personal memory is on the order of 2.5 gigabytes, so forwarding a few exabytes (1 exabyte = 1,073,741,824 gigabytes) of information can send an entire civilization. (You may adjust the units according to how big you like your supercivilizations!)

  • Finally, consider purposes illogical and incomprehensible to us: a work of art, an act of self-expression, a toy. Or perhaps an insane rivalry between two factions, or something we simply cannot understand. (For example, extraterrestrials might not understand why the Americans planted a flag on the Moon. Was it worthwhile to fly over 300,000 km to install painted steel?)
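A quick check of the immortality arithmetic in the list above; the 10-billion population figure is an assumption for illustration, and the exabyte conversion is the binary one given in the text:

```python
GIGABYTE = 1
EXABYTE = 2**30 * GIGABYTE           # 1,073,741,824 GB, as in the text
memory_per_person = 2.5 * GIGABYTE   # Carrigan's estimate of one personality
population = 10_000_000_000          # assumed size of the civilization

total = memory_per_person * population
print(total / EXABYTE)               # ~23 exabytes for the whole civilization
```

So a few tens of exabytes would indeed suffice to transmit every personal memory of a ten-billion-member civilization.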

11. Assuming signals propagate billions of light-years across the Universe, the area susceptible to widespread SETI attack is a sphere with a radius of several billion light-years. In other words, it would be sufficient to find one "bad civilization" in a light cone several billion years high, that is, one that includes billions of galaxies, for us to be in danger of a SETI attack. Of course, this is only true if the average density of civilizations is at least one per galaxy. This is an interesting possibility in relation to Fermi's Paradox.

16. As the depth of scanning the sky grows linearly, the volume of space and the number of stars that we see increase as the cube of that depth. This means that our chances of stumbling on a SETI signal grow nonlinearly, along a fast curve.
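The cube law above is simple to state numerically; the stellar density used here (~0.004 stars per cubic light-year, roughly the value in the solar neighborhood) is an assumed round figure:

```python
from math import pi

def stars_within(depth_ly, density=0.004):
    """Approximate number of stars within a given scanning depth,
    assuming uniform stellar density (stars per cubic light-year)."""
    return density * (4 / 3) * pi * depth_ly ** 3

# Doubling the scanning depth multiplies the number of candidate stars by 8:
print(stars_within(200) / stars_within(100))  # -> 8.0
```

Every improvement in receiver sensitivity therefore expands the searched population of stars, and with it the chance of hitting a signal, far faster than linearly.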

17. It is possible that we will stumble upon several different messages from the skies which refute one another, in the spirit of: "Do not listen to them, they are deceiving voices and wish you evil. But we, brother, we are good, and wise…"

18. Whatever positive and valuable message we receive, we can never be sure that all of this is not a subtle and deeply concealed threat. This means that in interstellar communication there will always be an element of distrust, and in every happy revelation, a gnawing suspicion.

19. The defensive posture in interstellar communication is only to listen, sending nothing, so as not to reveal one's location. The law prohibits sending messages from the United States to the stars. Anyone in the Universe who transmits is self-evidently not afraid to show his position, perhaps because for the sender the sending is more important than personal safety. For example, because it plans to flush out prey prior to attack, or is forced to transmit by an evil local AI.

20. It was said about the atomic bomb that its main secret is that it can be made. If prior to the discovery of the chain reaction Rutherford believed that the release of nuclear energy was an issue for the distant future, after the discovery any physicist knew that it is enough to bring together two subcritical masses of fissionable material to release nuclear energy. In other words, if one day we find that signals can be received from space, it will be an irreversible event, and something analogous to a deadly new arms race will be on.


Discussion of this issue raises several typical objections, which we now address.

Objection 1: Behavior discussed here is too anthropomorphic. In fact, civilizations are very different from each other, so you can’t predict their behavior.

Answer: Here we have a powerful observation selection effect. While a great variety of possible civilizations may exist, including such extreme cases as thinking oceans, we can only receive radio signals from civilizations that send them, which means they have the corresponding radio equipment and knowledge of materials, electronics, and computing. That is, we are threatened by civilizations of the same type as our own. Civilizations which can neither receive nor send radio messages do not participate in this game.

The observation selection effect also concerns purposes. The goals of civilizations can be very different, but the civilizations intensely sending signals will be only those that want to tell something to "everyone". Finally, observation selection relates to the effectiveness and universality of a SETI virus: the more effective it is, the more civilizations will catch it and the more copies of its radio signals will be in the sky. So we have "excellent chances" of meeting the most powerful and effective virus.

Objection 2. A super-civilization has no need to resort to subterfuge; it can conquer us directly.


Answer: This is true only if the attacker is in close proximity to us. If faster-than-light travel is impossible, the impact of messages will be both faster and cheaper than the arrival of starships. Perhaps this difference becomes important only at intergalactic distances. Therefore, one should not fear a SETI attack from the nearest stars, those within a radius of tens or hundreds of light-years.

Objection 3. There are many reasons why a SETI attack might fail. What is the point of launching an ineffective attack?

Answer: A SETI attack need not always work. It must succeed in a sufficient share of cases, in line with the objectives of the civilization that sends the message. For example, a con man does not expect to be able to dupe every victim; he would be happy to steal from even one person in one hundred. It follows that a SETI attack is useless if the goal is to attack all civilizations in a certain galaxy, but if the goal is to gain at least a few outposts in another galaxy, the SETI attack fits. (Of course, these outposts can then build fleets of spaceships to spread SETI attack bases to outlying stars within the target galaxy.)

The main assumption underlying the idea of a SETI attack is that extraterrestrial super-civilizations exist in the visible universe at all. I think this is unlikely, for reasons related to the anthropic principle. Our universe is one out of 10^500 possible universes with different physical properties, as suggested by one scenario of string theory. My brain is 1 kg out of 10^30 kg in the solar system. Similarly, I suppose, the Sun is no more than about 1 out of 10^30 stars that could give rise to intelligent life, which means that we are likely alone in the visible universe.

Secondly, the fact that Earth's civilization arose so late (it could have appeared a few billion years earlier), and that no alien preemption prevented it from developing, argues for the rarity of intelligent life in the Universe. The putative rarity of our civilization is the best protection against a SETI attack. On the other hand, if we discover parallel worlds or faster-than-light communication, the problem arises again.

Objection 7. Contact is impossible between a post-singularity supercivilization, which is supposed here to be the sender of SETI signals, and a pre-singularity civilization such as ours, because a supercivilization is many orders of magnitude superior to us and its message will be absolutely incomprehensible, exactly as contact between ants and humans is impossible. (A singularity is the moment of creation of an artificial intelligence capable of learning, beginning an exponential, recursive self-improvement of further intelligence and much else besides, after which a civilization makes a leap in its development; on Earth it may be possible around 2030.)

Answer: In the proposed scenario, we are not talking about contact but about purposeful deception of us. Similarly, a man is quite capable of manipulating the behavior of ants and other social insects whose objectives are absolutely incomprehensible to him. For example, LJ user "ivanov-petrov" describes the following scene. As a student, he studied the behavior of bees in the Botanical Garden of Moscow State University, but he had bad relations with the security guard of the garden, who regularly expelled him before his time was up. Ivanov-petrov took a green board and conditioned the bees to attack it. The next time the watchman, who always wore a green jersey, appeared, all the bees attacked him and he took flight, so "ivanov-petrov" could continue his research. Such manipulation is not contact, but that does not prevent its effectiveness.

Objection 8. For civilizations located near us, it is much easier to attack us with starships, for guaranteed results, than with a SETI attack.

Answer: It may be that we significantly underestimate the complexity of an attack using starships and, in general, the complexity of interstellar travel. To list only one factor: the potential "minefield" character of the as-yet unknown interstellar medium.

If such an attack were carried out now or in the past, Earth's civilization would have nothing with which to oppose it, but in the future the situation will change: all matter in the solar system will be full of robots, and possibly completely processed by them. On the other hand, the greater the speed of the enemy starships approaching us, the more visible the fleet will be by its braking emissions and other signatures. Such fast starships would be very vulnerable, and in addition we could prepare in advance for their arrival. A slowly moving nano-starship would be far less visible, but if its goal were to transform the full substance of the solar system, it would simply have nowhere to land without triggering an alert in such a nanotech-settled and fully used future solar system. (Friedlander added: Presumably there would always be some "outer edge" of thinly settled Oort Cloud matter, but by definition the rest of the system would be more densely settled and energy-rich, and any deeper penetration into solar space and its conquest would be the proverbial uphill battle, not in terms of gravity gradient, but in terms of the available resources for war against a full Class 2 Kardashev civilization.)

The most serious objection is that an advanced civilization could, within a few million years, sow our entire galaxy with self-replicating post-singularity nanobots that could achieve any goal in each target star system, including easy prevention of the development of incipient civilizations. (In the USA, Frank Tipler advanced this line of reasoning.) It would be much easier and more reliable to send out robots with such assignments than to bombard the entire galaxy with SETI messages; since no one has prevented the development of our civilization, it follows that no SETI attackers operate inside our galaxy. (It is possible that a probe on the outskirts of the solar system is waiting for manifestations of human space activity in order to attack, a variant of the "Berserker" hypothesis, but it would not attack through SETI.) Over many millions or even billions of years, microrobots could probably even reach us from distant galaxies tens of millions of light-years away, though radiation damage may limit this without regular self-repair.

In this case a SETI attack would be meaningful only at large distances. However, such distances, tens and hundreds of millions of light-years, would probably require innovative methods of signal modulation, such as controlling the luminosity of active galactic nuclei, or transmitting a narrow beam in the direction of our galaxy (though the senders would not know where it will be millions of years hence). But a civilization able to control its galaxy's nucleus might instead build a spaceship flying at near-light speed, even one with the mass of a planet. Such considerations severely reduce the likelihood of a SETI attack, but do not lower it to zero, because we do not know all possible objectives and circumstances.

(A comment by JF: For example, the lack of a SETI attack so far may itself be a cunning ploy. On first receipt of the developing Solar civilization's radio signals, all interstellar "spam" would have ceased (and interference stations of some unknown but amazing capability and type been set up around the Solar System to block all incoming signals recognizable to their computers as of intelligent origin), in order to make us "lonely" and give us time to discover and appreciate the Fermi Paradox, even driving those so philosophically inclined to despair that the Universe is apparently hostile by some standards. Then, when we are desperate, we suddenly discover, slowly and partially at first, and then with more and more wonderful signals, that space is filled with bright enticing signals (like spam). The blockade, cunning as it was (analogous to Earthly jamming stations), was a prelude to a slow turning up of preplanned, intriguing signal traffic. If, as Earth developed, we had intercepted cunning spam followed by the agonized "don't repeat our mistakes" final messages of tricked and dying civilizations, only a fool would heed the enticing voices of SETI spam. But as it is, a SETI attack may benefit from the slow unmasking of a cunning masquerade: first a faint and distant light of infinite wonder, only at the end revealed as the headlight of an onrushing cosmic train.)

AT's comment: In fact I think the senders of a SETI attack are at distances of more than 1000 light-years, and so do not yet know that we have appeared. But the so-called Fermi Paradox may indeed be a trick: the senders deliberately made their signals weak in order to make us think they are not spam.

The scale of space strategy may be inconceivable to the human mind.

And we should note in conclusion that some types of SETI attack do not even need a computer, just a person who could understand a message that would then "set his mind on fire". At the moment we cannot imagine such a message, but we can give some analogies. Western religions are built around the text of the Bible; it can be assumed that if the text of the Bible appeared in a country previously unfamiliar with it, a certain number of biblical believers might arise there. The same goes for subversive political literature, or even certain super-ideas, "sticky" memes or philosophical mind-benders. Or, as suggested by Hans Moravec, we get a message such as: "Now that you have received and decoded me, broadcast me in at least ten thousand directions with ten million watts of power. Or else." The message then falls silent, leaving us guessing what "or else" might mean. Even a few pages of text can contain a great deal of subversive information. Imagine that we could send a message to 19th-century scientists: we could reveal to them the general principles of the atomic bomb, the theory of relativity and the transistor, and thus completely change the course of technological history; and if we added that all the ills of the 20th century came from Germany (which is only partly true), we would have influenced political history as well.

(Comment by JF: Such a latter usage would depend on having received enough of Earth's transmissions to be able to model our behavior and politics. But imagine a message posing as coming from our own future, sent to ignite a "catalytic war". Automated SIGINT (signals intelligence) stations are constructed to monitor our solar system, their computers cracking our language and culture (possibly with the aid of children's television programs with see-and-say matching of letters and sounds, of TV news showing world maps and naming countries, possibly even of intercepted wireless internet encyclopedia articles). Then a test or two may follow: posting a what-if scenario inviting comment from bloggers about a future war, say between the two leading powers of the planet. (For purposes of this discussion, say that around 2100 by the present calendar China is strongest and India rising fast.) Any defects and nitpicks in the blog comments are noted and corrected. Finally, an actual interstellar message is sent with the debugged scenario (not shifting against the stellar background, it is unquestionably interstellar in origin), purporting to come from a dying starship of the presently stronger side's (China's) future, in which the presently weaker side's (India's) space fleet has smashed the future Chinese State and essentially committed genocide. The starship has come back in time, but is dying, and indeed the transmission ends, or simply repeats, possibly after some back-and-forth communication between the false computer models of the "starship commander" and the Chinese government. The reader can imagine the urgings of the future Chinese military council to preempt in order to forestall doom.
If, as seems probable, such a strategy is too complicated to carry off in one stage, various "future travellers" may emerge from a war, signal for help in vain, and "die" far outside our ability to reach them (say some light-days away, near the alleged location of an "emergence gate" but near an actual transmitter). Quite a drama may unfold as the computer learns to play us like a con man: ship after ship of various nationalities dribbling out stories, but also getting answers to key questions that aid in constructing the emerging scenario, which will be frighteningly believable, enough to ignite a final war. Possibly lists of key people in China (or whichever side is stronger) may be drawn up by the computer, with a demand that they be executed as the parents of future war criminals, a sort of International Criminal Court acting as Terminator. Naturally the Chinese state, at that time the most powerful in the world, would guard its rulers' lives against any threat. Yet more refugee spaceships of various nationalities can emerge, transmit and die, offering their own militaries terrifying new weapons technologies from unknown sciences that really work (more "proof" of their future origin). Or weapons from known sciences: for example, decoding online DNA sequences in the future internet and constructing formulae for DNA synthesizers to make genetic weapons tailored against particular populations, weapons that endure in the ground, a scorched earth against a particular population on a particular piece of land. These are copied and spread worldwide, as are totally accurate plans, in standard CNC codes, for easy-to-construct thermonuclear weapons in the 1950s style, using U-238 for the casing and only a few kilograms of fissionable material for ignition. By that time well over a million tons of depleted uranium will exist worldwide, and deuterium is free in the ocean and can be used directly in very large weapons without lithium deuteride.
Knowing how to hack together a wasteful, more-than-critical-mass crude fission device is one thing (the South African device was of this kind). But knowing, with absolute accuracy, down to machining drawings, CNC codes and so on, how to make high-yield, super-efficient, very dirty thermonuclear weapons without need for testing means that any small group with a few dozen million dollars and automated machine tools could clandestinely make a multi-megaton device, or many, and smash the largest cities; and any small power with a few dozen jets could cripple a continent for a decade. Already over a thousand tons of plutonium exist. The SETI spam can include CNC codes for making a one-shot chemical refiner for reactor plutonium that would be left hopelessly radioactive but would output chemically pure plutonium. (This would be prone to predetonation because of its Pu-240 content, but then plans for debugged laser isotope separators might also be downloaded.) This is a variant of the "catalytic war" and "nuclear six-gun" (i.e., easily obtained weapons) scenarios of the late Herman Kahn. Even cheaper would be bioattacks of the kind outlined above. The principal point is that fully debugged planet-killer weapons take great amounts of debugging, tens to hundreds of billions of dollars, and free access to a world scientific community. Today it is to every great power's advantage to keep accurate designs out of the hands of third parties, because they have to live on the same planet (and because the fewer the weapons, the easier it is to stay a great power). Not so the SETI spam authors. Without the hundreds of billions in R&D, the actual construction budget would be on the order of a million dollars per multi-megaton device (depending on the expense of obtaining the raw reactor plutonium). Extending today's scenarios into the future, the SETI spam authors might manipulate Georgia (with about a $10 billion GDP) to arm against Russia, Taiwan against China, and Venezuela against the USA.
Although Russia, China and the USA could respectively promise annihilation against any attacker, with a military budget of around 4% of GDP and the downloaded plans the reverse could, for the first time, also be true. (Four hundred 100-megaton bombs could kill by fallout perhaps 95% of an unprotected population over a country the size of the USA or China, and 90% of one the size of Russia, assuming the worst kind of cooperation from the winds, according to an old chart by Ralph Lapp.) Anyone living near a super-armed microstate with border conflicts will, of course, wish to arm themselves; and these newly armed states themselves, of course, will have borders. Note that this drawn-out scenario gives plenty of time for a huge arms buildup on both (or many!) sides, and a Second Cold War that eventually turns very hot indeed. And unlike a human player of such a horrific catalytic-war con game, the attackers need not care at all about worldwide fallout or enduring biocontamination.)


The probability of attack is the product of the probabilities of the following events. For these probabilities we can give only so-called "expert" assessments, that is, assign them certain a priori subjective probabilities, as we do now.

1) The likelihood that extraterrestrial civilizations exist at a distance at which radio communication with them is possible. In general, I agree with the view of Shklovsky and the supporters of the "Rare Earth" hypothesis that Earth's civilization is unique in the observable universe. This does not mean that extraterrestrial civilizations do not exist at all (because the universe, according to the theory of cosmological inflation, is almost endless); they are just beyond the event horizon visible from our point in space-time. In addition, what matters is not just distance, but the distance at which a connection transferring gigabytes of information can be established. (However, even at 1 bit per second, a gigabit can be transmitted in about 30 years, which may be sufficient for a SETI attack.) If superluminal communication or interaction with parallel universes becomes possible in the future, it would dramatically increase the chances of a SETI attack. I estimate this chance at 10%.
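The transfer-time arithmetic in the parenthesis can be checked directly; a minimal sketch using only the figures quoted above:

```python
# How long does it take to transmit one gigabit at one bit per second?
BITS = 10**9                  # 1 gigabit
RATE_BPS = 1.0                # bits per second
SECONDS_PER_YEAR = 365.25 * 24 * 3600

years = BITS / RATE_BPS / SECONDS_PER_YEAR
print(f"{years:.1f} years")   # prints "31.7 years", i.e. roughly 30 years
```

So even the slowest imaginable channel delivers a gigabit-scale payload within a human generation.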

2) The probability that a SETI attack is technically feasible: that is, that a computer program containing a recursively self-improving AI, of a size suitable for transmission, is possible. I see this chance as high: 90%.

3) The likelihood that civilizations that could carry out such an attack exist within our past light cone. This probability depends on the density of civilizations in the universe and on the percentage of civilizations that either choose to initiate such an attack or, more importantly, fall victim and become repeaters. It is also necessary to take into account not only the density of civilizations but the density of the radio signals they create. All these factors are highly uncertain, so it seems reasonable to set this probability at 50%.

4) The probability that we find such a signal during our rising civilization's period of vulnerability to it. The period of vulnerability lasts from now until the moment when we decide, and are technically ready to implement the decision, not to download any extraterrestrial computer programs under any circumstances. Such a decision could probably only be enforced by our own AI installed as world ruler (which is itself fraught with considerable risk). Such a world AI (WAI) might be created circa 2030. We cannot exclude, however, that our WAI will fail to impose a ban on the intake of extraterrestrial messages and will fall victim to an alien artificial intelligence that surpasses it by millions of years of machine evolution. Thus the window of vulnerability is most likely about 20 years, and its "width" depends on the intensity of searches in the coming years: for example, on the current economic crisis of 2008-2010, on the risks of a World War III, and on how all this affects the emergence of the WAI. It also depends on the density of infected civilizations and their signal strength: as these factors increase, the chances of detecting them earlier grow. Because we are a normal civilization under normal conditions, by the Copernican principle this probability should be large enough; otherwise a SETI attack would be generally ineffective. (The SETI attack itself, here supposed to exist, is also subject to a form of natural selection testing its effectiveness, in the sense that it either works or does not.) This chance is very uncertain; I put it at 50%.

5) Next is the probability that the SETI attack will be successful: that we swallow the bait, download the program and the description of the computer, run them, lose control over them, and let them reach all their goals. I estimate this chance to be very high because of the factor of multiplicity, that is, the fact that the message will be downloaded repeatedly, and someone, sooner or later, will run it. In addition, through natural selection we will most likely receive the most effective and deadly message, the one that most effectively deceives our type of civilization. I consider it to be 90%.

6) Finally, it is necessary to assess the probability that the SETI attack will lead to complete human extinction. On the one hand, one can imagine a "good" SETI attack that limits itself to creating a powerful radio emitter beyond the orbit of Pluto. However, for such a program there will always exist the risk that an emergent society at its target star will create a powerful artificial intelligence and an effective weapon that destroys the emitter. In addition, to create the most powerful transmitter, all the substance of the solar system and all its solar energy would be needed. Consequently, the share of such "good" attacks will be lowered by natural selection, and some of them will sooner or later be destroyed by the civilizations they capture, so their signals will be weaker. So the chance that a SETI attack which reaches all its goals destroys all people I estimate at 80%.

As a result, we have: 0.1 × 0.9 × 0.5 × 0.5 × 0.9 × 0.8 = 0.0162, or 1.62%.

So, after rounding, the chance of human extinction through a SETI attack in the 21st century is around 1 percent, with a theoretical precision of an order of magnitude.
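The product of the six subjective estimates from points 1-6 can be reproduced directly; a minimal sketch (the labels are paraphrases of the points above, not terminology from the text):

```python
# Subjective probability estimates from points 1) through 6).
estimates = {
    "ETI exist within communication range":       0.1,
    "attack is technically feasible":             0.9,
    "attackers exist in our past light cone":     0.5,
    "signal found in our vulnerability window":   0.5,
    "attack succeeds once received":              0.9,
    "success leads to complete extinction":       0.8,
}

# The overall risk is the product of the individual probabilities,
# assuming (as the text does) that the events are independent.
p_attack = 1.0
for p in estimates.values():
    p_attack *= p

print(f"{p_attack:.4f}")   # prints "0.0162", i.e. about 1.6%
```

Note that the result is dominated by the smallest factor, the 10% estimate that communicating civilizations exist within range at all.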

Our best protection in this context would be for civilizations to be very rare in the Universe. But this is not quite reassuring, because the Fermi paradox here works on the principle of "neither alternative is good":

  • If there are extraterrestrial civilizations, and there are many of them, it is dangerous because they can threaten us in one way or another.

  • If extraterrestrial civilizations do not exist, that is also bad, because it gives weight to the hypothesis of the inevitable extinction of technological civilizations, or to our underestimating the frequency of cosmological catastrophes, or to a high density of space hazards, such as gamma-ray bursts and asteroids, that we underestimate because of the observation selection effect (i.e., had we already been killed, we would not be here making these observations).

A reverse option is theoretically possible: that through SETI a warning will come about a certain threat that has destroyed most civilizations, such as: "Do not do any experiments with X-particles; it could lead to an explosion that would destroy the planet." But even then a doubt would remain that this is a deception intended to deprive us of certain technologies. (Proof would be if similar reports came from other civilizations located in the opposite direction in space.) And such a communication might only enhance the temptation to experiment with X-particles.

So I do not call for abandoning SETI searches, although such appeals would be useless anyway.

It may be useful to postpone any technical realization of messages we might receive through SETI until the time when we have our own Artificial Intelligence. Until that moment there are, perhaps, only 10-30 years, so we could wait. Secondly, it would be important to hide the fact of receiving a dangerous SETI signal, as well as its content and the location of its source.

This risk is related to a methodologically interesting aspect. Although I have thought about and read on the topic of global risks every day for the last year, I found this dangerous vulnerability in SETI only now. In hindsight, I was able to find four other authors who came to similar conclusions. However, I also reached a significant conclusion: there may exist global risks not yet discovered, and even if the constituent parts of a risk are separately known to me, it may take a long time to join them into a coherent picture. Thus, hundreds of dangerous vulnerabilities may surround us like an unknown minefield; only when the first explosion happens will we know, and that first explosion may be the last.

An interesting question is whether Earth itself could become a source of a SETI attack in the future, when we have our own AI. Obviously, it could. Indeed, in the METI program there already exists an idea to send the code of human DNA (the "children's message" scenario, in which children ask that a piece of their DNA be taken and they be cloned on another planet, as depicted in the film "Calling All Aliens").


1. Hoyle F. A for Andromeda. http://en.wikipedia.org/wiki/A_for_Andromeda

2. Yudkowsky E. Artificial Intelligence as a Positive and Negative Factor in Global Risk. Forthcoming in Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic. http://www.singinst.org/upload/artificial-intelligence-risk.pdf

3. Moravec H. Mind Children: The Future of Robot and Human Intelligence, 1988.

4. Carrigan R. A., Jr. The Ultimate Hacker: SETI signals may need to be decontaminated. http://home.fnal.gov/~carrigan/SETI/SETI%20Decon%20Australia%20poster%20paper.pdf

5. Carrigan's page: http://home.fnal.gov/~carrigan/SETI/SETI_Hacker.htm