Does mankind have a duty to warn extraterrestrial civilizations that we might someday unintentionally build an unfriendly super-intelligent AI that expands at the speed of light, gobbling up everything in its path? Assuming that the speed of light really is the maximum, our interstellar radio messages would outpace any paperclip maximizer. Obviously, any such message would complicate future alien contact events, as the aliens would worry that our ambassador was just an agent of a paperclipper. The act of warning others would also be a good way to self-signal the dangers of AI.
I’d have thought any extraterrestrial civilization capable of doing something useful with the information wouldn’t need the explicit warning.
This depends on the solution to the Fermi paradox. An advanced civilization might have decided not to build defenses against a paperclip maximizer because it figured no other civilization would be stupid or evil enough to attempt AI without a mathematical proof that its AI would be friendly. A civilization near our level of development might use the information to accelerate its own AI program. If a paperclip maximizer beats everything else, an advanced civilization might respond to the warning by moving away from us as fast as possible, taking advantage of the expansion of the universe in the hope of ending up in a different Hubble volume from us.
One response to such a warning would be to build a super-intelligent AI that expands at the speed of light gobbling up everything in its path first.
And when the two (or more) collide, it would make a nice SF story :-)
This wouldn’t be a horrible outcome, because the two civilizations’ light cones would never fully intersect. Neither civilization would fully destroy the other.
Are you crazy? Think of all the potential paperclips that wouldn’t come into being!
The light cones might not fully intersect, but humans do not expand at close to the speed of light. A partial intersection is enough to destroy the populated planets.
I love this idea! A few thoughts:
What could the alien civilizations do? Suppose SETI decoded “Hi from the Andromeda Galaxy! BTW, nanobots might consume your planet in 23 years, so consider fleeing for your lives.” Is there anything humans could do?
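The timing in that hypothetical is worth unpacking: a light-speed radio warning only beats the threat by the difference in travel times, so the lead time depends entirely on the distance and the pursuer’s speed. A quick sketch of the arithmetic (Python; the function name and the sample speeds are illustrative assumptions, not from the thread):

```python
# How much lead time does a light-speed warning give over a replicator
# wavefront travelling at beta * c? (Distances in light-years, times in years.)

def warning_years(distance_ly: float, beta: float) -> float:
    """Years between the warning arriving and the wavefront arriving."""
    return distance_ly / beta - distance_ly

D = 2.5e6  # Andromeda is roughly 2.5 million light-years away
for beta in (0.5, 0.99, 0.999999):
    print(f"beta = {beta}: {warning_years(D, beta):,.0f} years of warning")

# Inverting: a 23-year warning from Andromeda implies a wavefront speed of
print(f"beta = {D / (D + 23):.7f}")  # within about one part in 1e5 of c
```

So the scenario quietly assumes a paperclipper moving within roughly one part in 10^5 of light speed; anything slower would give millennia of warning.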
The costs might be high. Suppose our message saves an alien civilization one thousand light-years away, but delays a positive singularity by three days. By the time our colonizers reach the alien planet, the opportunity cost would be a three-light-day-deep shell of a thousand-light-year sphere. Most of the volume of a sphere is close to the surface, so this cost is enormous. Giving the aliens an escape ark when we colonize their planet would be quintillions of times less expensive. Of course, a paperclipper would do no such thing.
It may be presumptuous to warn about AI specifically. Perhaps the correct message is something like “If you think of a clever experiment to measure dark energy density, don’t do it.”
It depends on your stage of development. You might build a defense, flee at close to the speed of light and take advantage of the universe’s expansion to get into a separate Hubble volume from mankind, accelerate your AI program, or prepare for the possibility of annihilation.
Good point, and the resources we put into signaling could instead be used to research friendly AI.
The warning should be honest and give our best estimates.
Quite.
The outer three light-days of a 1000-light-year sphere account for 0.0025% of its volume.
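For what it’s worth, that figure checks out; a quick sketch of the shell arithmetic (Python, variable names mine):

```python
# Fraction of a 1000 ly sphere's volume lying in its outer 3 light-days.

R = 1000.0        # sphere radius in light-years
t = 3 / 365.25    # three light-days, expressed in light-years

exact = 1 - ((R - t) / R) ** 3   # exact shell fraction
approx = 3 * t / R               # thin-shell approximation for t << R

print(f"exact:  {exact:.6%}")    # ~0.002464%, i.e. about 0.0025%
print(f"approx: {approx:.6%}")
```

So “most of the volume is close to the surface” holds only for shells that are a sizable fraction of the radius; a three-light-day skin on a thousand-light-year sphere is negligible.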