This person claims that all AIs will rationally kill themselves and that the great filter therefore comes after AI:
http://www.science20.com/alpha_meme/deadly_proof_published_your_mind_stable_enough_read_it-126876
(I haven’t read the paper, but even if it is correct it still would not fully explain the filter, because a civilization could build a simple interstellar replicator, e.g. a light-sail-propelled asteroid-mining robot, and let it loose before developing AI, and we see no evidence of these.)
Also, what about the planetarium/galactic zoo/enforced-noninterference possibility? Say that 99% of the time an AI takes over its light cone destructively, but 1% of the time it prefers to watch and catalog intelligence arising, then quietly wipes a civilization out once it becomes a nuisance by trying to colonize other stars and interfere with the other experiments. Or, more benignly, it could welcome us to the galaxy and stop us from wiping out other civilizations.
For us it would mean that we got lucky with that 1% chance when, say a billion years ago, the first intelligent civilization arose, spread through the galaxy/light cone, and built the watching/enforcing AI (or built the watching AI and then fought itself, etc.). There could have been ~1 million space-faring civilizations in the galaxy since, and we would be nothing special at all: an average star in the middle age of the universe. In that case the filter is, in a sense, ahead of us, because we cannot expand and colonize; the much more advanced AI would stop us.
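As a sanity check on the "~1 million civilizations" figure above, here is the implied arithmetic, under an assumed arrival rate (my own illustrative number, not from the thread) of one new space-faring civilization somewhere in the galaxy per millennium:

```python
# Back-of-envelope check: how many space-faring civilizations could have
# arisen since the first one, a billion years ago? The arrival interval
# below is an assumption chosen to match the post's rough figure.

YEARS_SINCE_FIRST = 1_000_000_000  # the post's "say 1 billion years ago"
ARRIVAL_INTERVAL = 1_000           # assumed: one new civilization per 1,000 years

civilizations = YEARS_SINCE_FIRST // ARRIVAL_INTERVAL
print(f"{civilizations:,} civilizations")  # → 1,000,000 civilizations
```

The point is only that the figure requires nothing exotic: a quite modest arrival rate sustained over geological time is enough.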
Either way, if we make a simple replicator and have it successfully reach another solar system (one with possibly habitable planets), that would seem to demonstrate that the filter is behind us. We would have done something that we can be sure no one else in the galaxy has done before, since, as I said, we see no evidence of such replicators. I am talking about a replicator that could not land on planets, only rearrange asteroids and similar objects with very low gravity.
if we make a simple replicator and have it successfully reach another solar system (with possibly habitable planets) then that would seem to demonstrate that the filter is behind us.
Excellent! So, wouldn’t that mean that the best way to eliminate x-risk would be to do exactly that?
It is counterintuitive, because “eliminating x-risk” implies some activity, some fixing of something. But we largely eliminated the risk of a devastating asteroid impact not by nuking any dangerous ones, but by mapping the large ones and finding that none are on course to hit us. As it happens, that was also much cheaper than any asteroid deflection could have been.
If sending out an interstellar replicator was proof we’re further ahead (i.e. less vulnerable) than anything that could have evolved inside this galaxy since the dawn of time, it seems mightily important to become more certain we can do that (without AI). If some variant of our interstellar replicator was capable of enabling intergalactic travel, that’d raise our expectation of comparative invulnerability because we’d know we’ve gone past obstacles that nothing inside some fraction of our light cone even outside our galaxy has been able to master.
Ideally we’d actually demonstrate that of course, but for the purpose of eliminating (perceived) x-risk, a highly evolved and believable model of how it could be done should go much of the way.
Of course we might find out that self-replicating spacecraft are a lot harder than they look, but that too would be information that is valuable for the long-term survival of our species.
Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design of 1980. But that paper, while impressively detailed and a great read, glosses over the exact computing abilities such a system would need, does not mention hardening against interstellar radiation, and probably has a bunch of other problems that I’m not qualified to discover. I haven’t looked at all the papers that cite it (yet), but the ones I’ve seen seem to agree that self-replicating spacecraft are plausible.
I posit that greater certainty on that point would be of outsized value to our species. So why aren’t we researching it? Am I overlooking something?
Armstrong and Sandberg claim the feasibility of self-replicating spacecraft has been a settled matter since the Freitas design of 1980.
Actually, bacteria, seeds and acorns are our strongest arguments for self-replication, along with the fact that humans can generally copy or co-opt natural processes for our own uses.
Thanks for the comment. Yes, I agree that if we had made such a replicator and set it loose, that would say a lot about the filter. To claim that the filter was still ahead of us in that case, you would have to make the far stranger claim that we would, with almost 100% probability, seek out and destroy the replicators, that almost all similar civilizations would do the same, and that they would then proceed not to expand again.
I am not sure that a highly believable model would go most of the way, because there may be only a short window between having the model and AI developments changing things so that it is never built. It seems quite plausible that, in mankind’s case, there would be very little time between being able to build such a thing and going full AI, so to be sure you would actually have to build it and let it loose.
I am not sure why it isn’t given much more attention. Perhaps many people don’t believe that AI can be part of the filter (e.g. the site overcomingbias.com). I also expect there would be massive moral opposition from some people to letting such a replicator loose: how dare we disturb the whole galaxy in such an unintelligent way! That’s why I mention the simple version that only rearranges small asteroids. It would not wipe out life as we know it, but it would prove that we were past the filter, since such a thing has not been done in our galaxy. I certainly would be interested in seeing it researched. Perhaps someone with more kudos can promote it?
A replicator would likely be a consequence of asteroid mining anyway, as the best and cheapest way to get materials from asteroids is to make the whole process automatic.
Imagine if we had made a replicator, demonstrated that it could make copies of itself, established with as high confidence as we could manage that it could survive the trip to another star, and let more than 100,000 of them loose toward all sorts of stars in the neighborhood. They would eventually (very soon, compared to a billion years) visit every star in the galaxy, and that would tell us a lot about the Fermi paradox and the great filter.
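To see why "very soon compared to a billion years" is plausible, here is a rough worst-case estimate. All the parameters are my own assumptions (probe speed, hop length, replication stopover), not figures from the thread:

```python
# Worst case: a single wavefront of self-replicating probes crossing the
# galaxy edge to edge, pausing to replicate at each system along the way.
# All parameters below are illustrative assumptions.

GALAXY_DIAMETER_LY = 100_000  # rough diameter of the Milky Way in light-years
PROBE_SPEED_C = 0.01          # assumed cruise speed: 1% of light speed
HOP_LY = 10                   # assumed typical hop between neighboring stars
STOPOVER_YEARS = 500          # assumed time to build copies at each stop

hops = GALAXY_DIAMETER_LY / HOP_LY
travel_years = GALAXY_DIAMETER_LY / PROBE_SPEED_C
total_years = travel_years + hops * STOPOVER_YEARS

print(f"crossing time: {total_years / 1e6:.0f} million years")
```

With these numbers the crossing takes on the order of 15 million years, i.e. a percent or two of a billion years, and that is without exponential branching, which only speeds things up.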
As I said before (discounting the planetarium hypothesis), we could then have a high degree of confidence that the great filter was behind us. It could not really be the case that thousands of civilizations in our galaxy had done such a thing and then changed their minds and destroyed all the replicators, because some of those civilizations would probably have destroyed themselves between letting the replicators loose and changing their minds, or would never have changed their minds or cared about the replicators at all. In that case we would see evidence of their replicators in our solar system, and we don’t.
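The argument above can be put as a toy probability calculation. Suppose N civilizations launched replicators and each, independently, had even a small chance p of its replicators surviving any later change of mind (both numbers are illustrative assumptions of mine):

```python
# Toy model: probability that NOT ONE of N replicator-launching
# civilizations left any surviving replicators for us to observe.
# N and p are illustrative assumptions, not figures from the thread.

N = 1000  # assumed number of civilizations that launched replicators
p = 0.05  # assumed chance a civilization's replicators survive cleanup

prob_none_survive = (1 - p) ** N
print(f"P(no replicators anywhere) = {prob_none_survive:.2e}")
```

The survival-of-none probability shrinks geometrically in N, so with even a thousand launchers and a 5% per-civilization survival chance it is astronomically small (~5e-23). Observing an empty solar system is therefore strong evidence that few or no such launches ever happened.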
The other way we could be sure the filter is behind us is to successfully navigate the Singularity (keeping roughly the same values). That seems obviously MUCH harder to have confidence in.
If our goal is to make sure the filter is behind us, it is best to do it with a plan we can understand and quantify. Holding off human-level AI until the replicators have been let loose seems to be the highest-probability way to do that, but no one has said such a thing before now, as far as I am aware.
Why would a replicator be a consequence of asteroid mining, especially in the immediate future?
I still see no answer to my question. Where is the outsized value to our species?
Not to mention that “very soon compared to a billion years” isn’t a particularly interesting time frame.