Oh FFS. If an alien-origin artificial intelligence explosion occurred in our past lightcone, it was non-hostile, or at least not a paper-clip optimizer; otherwise we would not still be here to wonder about it. And either it just flat-out did not care about the stars, or it is already here, studying us from vantage points immune to our perception.
Which is not a difficult feat: miniaturization and predictive avoidance would do it. We could be living in a full panopticon and never know, as long as the individual motes are sufficiently small to avoid direct perception and sufficiently mobile to not get caught in instrumentation. Hmm. As a theory for why the universe hasn’t been turned into computing hardware for a runaway machine, universal dispersal of a friendly-ish AI (it obviously does not do requests) is quite reasonable. If this is indeed the case, anyone building a /hostile/ AI will simply be stopped by the security hardware laid down by the ancients.
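For scale on “sufficiently small to avoid direct perception”, a quick back-of-envelope (my numbers, nothing established; this is about resolution, not absolute detectability, since a bright speck can still betray itself by scattered light, like dust in a sunbeam):

```python
import math

# Back-of-envelope: how small must a mote be to fall below
# naked-eye resolution?
EYE_RESOLUTION_RAD = math.radians(1 / 60)  # ~1 arcminute, the textbook figure
NEAR_POINT_M = 0.25                        # closest distance the eye can focus

min_resolvable_m = EYE_RESOLUTION_RAD * NEAR_POINT_M
print(f"Smallest resolvable speck at the near point: "
      f"{min_resolvable_m * 1e6:.0f} um")
# ~73 um. Sub-dust-mote hardware sits below the perceptual floor.
```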
Note: If such a system does not already exist, upon success of Friendly AI, have one built.
Star travel is a difficult feat for biological entities. Given the level of competency you are ascribing to alien AI, the gap between the stars would be trivial, and there would be no need for, nor indeed any point in, relying on local assistance. The time frame of complex life on planets is enormously long, and once an intelligence has spawned an AI, machines could easily outlast both that biosphere and the local star. Any alien AI that could be of concern to us is exceedingly unlikely to be of recent origin, which means it either does not exist, or it is already in the solar system, and has been here since before man discovered fire.
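To put numbers on “of recent origin” (illustrative figures, and the probe speed is a deliberate lowball):

```python
# Toy timescale comparison: even a slow probe swarm crosses the galaxy
# in a sliver of the time complex life has existed here.
GALAXY_DIAMETER_LY = 100_000        # rough diameter of the Milky Way
PROBE_SPEED_C = 0.01                # an unambitious 1% of lightspeed

crossing_time_yr = GALAXY_DIAMETER_LY / PROBE_SPEED_C  # ly / (ly per yr) = yr
print(f"Galaxy crossing time at 1% c: {crossing_time_yr / 1e6:.0f} Myr")

COMPLEX_LIFE_YR = 600e6             # roughly since the Cambrian explosion
print(f"As a fraction of complex life's run on Earth: "
      f"{crossing_time_yr / COMPLEX_LIFE_YR:.1%}")
# 10 Myr crossing vs. 600 Myr of complex life: unless the alien AI arose
# very recently indeed, it has had ample time to already be parked here.
```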
Given the level of competency you are ascribing to alien AI
Yes. This post is a case of writing a scary story, then being convinced by how scary it is. “You can’t prove it’s impossible!” is not a reason to spend any effort on a negligible probability; humans are just very bad at ignoring negligible probabilities.
I’m wondering to what degree scary campfire stories for amateur philosophers could be said to be a local literary form.

Not exactly like that, I hope. Death still sucks, and adding an afterlife isn’t a dramatic improvement.
… The Prime Directive is bullshit, but I am actually having some considerable difficulty thinking of an appropriate protocol for dealing with alien life from the perspective of deep time. When sending out a swarm of AI to safeguard against someone else doing something deeply stupid to the universe at large, the only things it is at all likely to encounter are apex civilizations that have already successfully dealt with these issues, as demonstrated by their not being dead (and, having home-court advantage, such societies will eat it for lunch if it tries anything at all they do not like), or ecosystems which do not yet have tool users in them at all. In the second case, it can dig in and wait. But having it start granting wishes to the first proto-sapient to evolve does not seem… advisable. The minimal-intervention rule would be “Do not permit anyone to inflict damage to the universe/galaxy at large”, but there are a whole bunch of options escalating from there, as sketched below.
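For concreteness, one way to write that escalation ladder down; the level names and ordering are my own invention, purely illustrative:

```python
from enum import IntEnum

# A sketch of the escalation ladder, not anyone's established taxonomy.
class Intervention(IntEnum):
    OBSERVE_ONLY = 0           # dig in and wait; pure surveillance
    BLOCK_GALACTIC = 1         # minimal rule: stop galaxy-scale damage
    BLOCK_SELF_EXTINCTION = 2  # also stop a species wiping itself out
    QUIET_NUDGES = 3           # covert steering of a developing civilization
    OPEN_CONTACT = 4           # reveal yourself and negotiate
    WISH_GRANTING = 5          # the level argued above to be inadvisable

def permitted(mandate: Intervention, action: Intervention) -> bool:
    """An action is allowed only if the swarm's mandate reaches that level."""
    return action <= mandate

mandate = Intervention.BLOCK_GALACTIC  # the minimal-intervention mandate
print(permitted(mandate, Intervention.BLOCK_GALACTIC))  # True
print(permitted(mandate, Intervention.WISH_GRANTING))   # False
```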
Paranoia: Can anyone think of a good way to check for already-installed hardware of this type? EMP a random spot and go through the dust with a microscope?
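A rough sense of the odds, assuming a made-up mote density (the real one is of course unknown):

```python
import math

# Feasibility check on the microscope idea. If motes are Poisson-scattered
# with some number density, the chance a sample contains at least one is
# 1 - exp(-density * volume).
MOTE_DENSITY_PER_M3 = 1.0   # assumed: one mote per cubic metre of material
SAMPLE_VOLUME_M3 = 0.01     # ten litres of dust under the microscope

expected = MOTE_DENSITY_PER_M3 * SAMPLE_VOLUME_M3
p_hit = 1 - math.exp(-expected)
print(f"P(at least one mote in the sample) = {p_hit:.1%}")
# ~1.0% per ten-litre sample at that density. You either sieve enormous
# volumes or force the motes to reveal themselves; hence the EMP, to defeat
# the predictive-avoidance trick.
```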
Well. Given my opinion on the ethics of nature, my own instructions to such a swarm would be to destroy all life: through uploading, for the smarter parts, but at any rate to stop nature from existing.
It might also be nice to shut off all the stars, because they’re really wasting a lot of energy.
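For scale on the waste (standard textbook values):

```python
import math

# The fraction of the Sun's output that hits the only known biosphere.
SUN_LUMINOSITY_W = 3.8e26
EARTH_RADIUS_M = 6.371e6
AU_M = 1.496e11

fraction = (math.pi * EARTH_RADIUS_M**2) / (4 * math.pi * AU_M**2)
print(f"Fraction of solar output intercepted by Earth: {fraction:.1e}")
print(f"Power radiated past everybody: "
      f"{SUN_LUMINOSITY_W * (1 - fraction):.2e} W")
# ~4.5e-10 of the Sun's output lands on Earth; essentially all the rest
# shines into empty space.
```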