Once AI is developed, it could “easily” colonise the universe.
I dispute this assumption. I think it is vanishingly unlikely for anything self-replicating (biological, technological, or otherwise) to survive trips from one island-of-clement-conditions (~ ‘star system’) to another.
Nick Beckstead wrote up an investigation into this question, with the conclusion that current consensus points to it being possible.
http://lesswrong.com/lw/hll/to_reduce_astronomical_waste_take_your_time_then/ : six hours of the sun’s energy for every galaxy we could ever reach, at a redundancy of 40. Given a million years, we could blast a million probes per star at least. Some will get through.
6 hours of the sun’s energy, or 15 billion years’ worth of current human energy use (or a few trillion years’ worth at the consumption rates of the early first millennium; growth really was not exponential until the 19th/20th centuries, and these days it’s closer to linear). The only way you get energy levels that high is with truly enormous stellar-scale engineering projects like Dyson clouds, which we see no evidence of when we look out into the universe in infrared—those are something we would actually be able to see. Again, if things of that sheer scale are something that intelligent systems don’t get around to building for one reason or another, then this sort of project would never happen.
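The arithmetic behind that comparison is easy to check (the solar luminosity and the ~18 TW figure for present-day human consumption below are my own assumed round numbers, not from the thread):

```python
# Rough check of the "6 hours of sunlight ~ 15 billion years of human
# energy use" comparison.  All input figures are approximate assumptions.

SOLAR_LUMINOSITY_W = 3.8e26   # total power output of the Sun
HUMAN_POWER_W = 18e12         # ~18 TW, rough current world consumption
SECONDS_PER_YEAR = 3.15e7

six_hours_of_sun_J = SOLAR_LUMINOSITY_W * 6 * 3600
human_use_per_year_J = HUMAN_POWER_W * SECONDS_PER_YEAR

years_equivalent = six_hours_of_sun_J / human_use_per_year_J
print(f"{years_equivalent:.2e} years")  # on the order of 1e10, i.e. ~15 billion
```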
Additionally, the papers referenced there posit ‘seed’ masses of mere grams sent to other GALAXIES, with black-box arbitrary control over matter and the capacity to last megayears in the awful environment of space. Pardon me if I don’t take that possibility very seriously, and adjust the energy figures up accordingly.
Really, you think that if our civilization survives another million years we won’t be able to do this? At the very least we could freeze human embryos, create robots that turn the embryos into babies and then raise them, put them all on a slow starship and send the ship to an Earth-like planet.
I think it’s quite unlikely, yes.
It seems like a natural class of explanations for the Fermi paradox, and I am always surprised it doesn’t get proposed more often. Most people pile into ‘intelligent systems almost never appear’ or ‘intelligent systems have extremely short lifespans’. Why not ‘intelligent systems find it vanishingly difficult to spread beyond small islands’? It seems more reasonable to me than either of the previous two, as spreading is something we haven’t yet seen intelligent systems do (whereas we ourselves are an example of one both arising and sticking around for a long time).
If I must point out more justification than that, I would immediately go with:
1 - All but one of our ships BUILT for space travel that have reached escape velocity have failed within a few decades and less than 100 AU out. Space is a hard place to survive in.
2 - All self-replicating systems on Earth live in a veritable bath of materials and energy they can draw on; a long-haul spaceship has to either use literally astronomical amounts of energy at the source and destination to change velocity, or ‘live’ off nothing but catabolizing itself in an incredibly hostile environment for millennia at least, all while containing everything it needs to set up self-replication in a completely alien environment.
Edit: a friend of mine has brought this paper to my attention:
http://www.geoffreylandis.com/percolation.htp
It proposes a percolation model of interstellar travel in which there is a maximum possible colonization distance and some probability of any successful colony spawning colonizers itself. It avoids all three of the above-posited explanations for the Fermi paradox and instead proposes a model of expansion that does not lead to exponential consumption of everything.
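A toy version of that percolation dynamic is easy to sketch (this is my own illustrative 2D-grid simplification, not the model from the paper itself):

```python
import random

def percolation_colonization(p, grid=101, max_steps=500, seed=0):
    """Toy Landis-style percolation model on a 2D grid.

    Each newly settled site becomes an active colonizer with
    probability p; active colonizers settle all unvisited neighbours.
    Returns the number of sites ever settled.
    """
    rng = random.Random(seed)
    settled = {(grid // 2, grid // 2)}
    frontier = [(grid // 2, grid // 2)]
    for _ in range(max_steps):
        new_frontier = []
        for (x, y) in frontier:
            if rng.random() >= p:
                continue  # this colony never sends out ships
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < grid and 0 <= ny < grid and (nx, ny) not in settled:
                    settled.add((nx, ny))
                    new_frontier.append((nx, ny))
        if not new_frontier:
            break
        frontier = new_frontier
    return len(settled)

# Below the percolation threshold expansion fizzles out after a handful
# of colonies; above it, expansion fills most of the grid but can still
# leave permanently uncolonized voids behind the frontier.
print(percolation_colonization(0.3), percolation_colonization(0.8))
```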
Voyagers 1 and 2 were launched in 1977, are currently 128 and 105 AU from the Sun, and are both still communicating. They were designed to reach Jupiter and Saturn—Voyager 2 had mission extensions to Uranus and Neptune (interestingly, it was completely reprogrammed after the Saturn encounter, and now makes use of communication codes that hadn’t been invented when it was launched).
Pioneers 10 and 11 were launched in 1972 and 1973 and remained in contact until 2003 and 1995 respectively; their failures were due to their radioisotope power sources no longer producing enough power for communication. Pioneer 10 stayed in communication out to 80 AU.
New Horizons was launched in 2006 and is still going (encounter with Pluto next year). So, 3 out of 5 probes designed to explore the outer solar system are still going, 2 with 1970s technology.
The Voyagers are 128 and 104 AU out now that I look them up—looks like I missed Voyager 2 hitting the 100 AU mark about a year and a half ago.
I still get what you are saying, but I’m not convinced that all that much has been done in the realm of spacecraft reliability recently aside from avoiding moving parts and adding lots of redundancy; probes still have major issues quite frequently. Additionally, all outer solar system probes are essentially rapidly catabolizing plutonium pellets they bring along for the ride, with effective lifetimes measured in decades before they can no longer power themselves and their instruments degrade from lack of the active heating and other management that keeps them functional.
Thanks for the link to the paper with the percolation model. I think it’s interesting, but the assumption of independent probabilities at each stage seems rather implausible. You only need one civilization to hit upon a goal-preserving method of colonisation, and from then on it seems the probability should stay high.
OK, but even if you are right we know it’s possible to send radio transmissions to other star systems. Why haven’t we detected any alien TV shows?
Because to creatures such as us that have only been looking for a hundred years with limited equipment, a relatively ‘full’ galaxy would look no different from an empty one.
Consider the possibility that there are about 10,000 intelligent systems in our galaxy that can use radio-type effects (a number I suspect is a wild over-estimate, given the BS numbers I occasionally half-jokingly calculate from what I know of the evolutionary history of life on Earth, cosmology, and astronomy—but it’s just an example). That puts each one, on average, in an otherwise ‘empty’ cube 900 light years on a side that contains millions of stars. EDIT: if you up it to a million intelligent systems, the cube only shrinks to about 200 light years wide with just under half a million stars; I chose 10,000 because then the cube is about the thickness of the galaxy’s disc and the calculation was easy.
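Those cube figures can be reproduced with a quick back-of-the-envelope script (the galactic radius, disc thickness, and star count below are my own assumed round figures):

```python
import math

# Reproducing the "cube per civilization" numbers.  The galaxy is
# modelled as a disc ~50,000 ly in radius and ~1,000 ly thick holding
# a few hundred billion stars -- all assumed round figures.

R_LY, THICKNESS_LY, N_STARS = 50_000, 1_000, 4e11
disc_volume = math.pi * R_LY**2 * THICKNESS_LY   # ~7.9e12 cubic light years
star_density = N_STARS / disc_volume             # stars per cubic light year

for civilizations in (10_000, 1_000_000):
    cube_side = (disc_volume / civilizations) ** (1 / 3)
    stars_in_cube = star_density * cube_side**3
    print(f"{civilizations:>9}: cube ~{cube_side:,.0f} ly wide, "
          f"~{stars_in_cube:,.0f} stars")
```

With 10,000 civilizations the cube works out to roughly 900 ly on a side (about the disc thickness) holding tens of millions of stars; with a million it shrinks to about 200 ly and a few hundred thousand stars.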
According to figures I have seen, we would be unable to detect Earth’s own omnidirectional radio leakage from even a light year away, and since omnidirectional signals fall off with the square of distance, to be seen from 10 light years away a signal would need to be hundreds of times as strong. Seeing as omnidirectional radio has decreased with technological sophistication here, I find such powerful beacons doubtful. We probably would never see others’ omnidirectional signals, and given the same square-of-distance falloff, I doubt anybody would see ours either.
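The square-of-distance scaling is easy to make concrete (the ~1 light year detection limit for Earth-level leakage is the assumed figure, not a measured one):

```python
# Inverse-square scaling: the transmitter power needed for a fixed
# receiver to pick up an omnidirectional signal grows with the square
# of the distance.  The 1 ly detection limit for Earth-level leakage
# is an assumed figure.

detectable_range_ly = 1.0

for target_ly in (10, 100, 1000):
    power_multiplier = (target_ly / detectable_range_ly) ** 2
    print(f"{target_ly:>5} ly needs {power_multiplier:,.0f}x the power")
```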
That leaves you looking for directional transmissions. Those might not even be radio, as optical transmissions can be sent directionally and suffer less from diffraction over very long distances, but I will ignore that possibility. You need a directional transmitter and, over the distances we are talking about, a directional receiver. Such a directional signal could just randomly happen to be pointed towards us (along the line of sight to a spacecraft, or along a horizon), or it could be deliberately sent out to other star systems. What are the odds that two points that do not know of each other’s existence (given that human impacts on Earth’s atmosphere are only two to three centuries old, and radio even younger) happen to have the transmitter pointing in the proper direction and the receiver looking in the proper direction at the exact right moment?
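For a sense of how bad those odds are, here is a rough sketch of the geometric part alone (the 1-degree beam width is an illustrative assumption, and pointing is treated as random; timing would cut the odds further still):

```python
import math

# Rough odds that a narrow directional beam happens to cover a given
# target, assuming the pointing is effectively random.  Beam width and
# the random-pointing model are both illustrative assumptions.

def sky_fraction(beam_half_angle_deg):
    """Fraction of the full sky covered by a cone of the given half-angle."""
    theta = math.radians(beam_half_angle_deg)
    return (1 - math.cos(theta)) / 2  # cone solid angle / (4 * pi)

transmit = sky_fraction(0.5)  # ~1-degree-wide transmit beam
receive = sky_fraction(0.5)   # equally narrow receiving beam
print(f"one beam covers {transmit:.1e} of the sky")
print(f"both aligned by chance: {transmit * receive:.1e} (before timing)")
```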
In short, the only things we have reliably excluded so far are truly huge engineering projects like Dyson clouds (which you would see in the infrared), ridiculously powerful omnidirectional signals of a sort I find unlikely in our immediate neighborhood (tens of light years), or something that for some reason decides to spend a lot of time and effort pinging millions of nearby stars every few years or less for quite a large fraction of its history. We’ve sent out what, a dozen or two directional beams to other stars over half a century?
You’re forgetting self-replicating colony ships.
Factories can already self-replicate given human help. They’ll probably be able to do it on their own inside the next two decades. After that, we’re looking at self-replicating swarms of drones that tend to become smaller and smaller, and eventually they’ll fit on a spaceship and spread across the galaxy like a fungus, eating planets to make more drones.
That doesn’t strictly require AGI, but AGI would have no discernible reason not to do that, and this has evidently not happened because we’re here on this uneaten planet.
I also see this claimed often, but my best guess is that this might well be the hard part. Getting into space is already hard. Fusion could be technologically impossible (or never energy-positive).
Fusion is technologically possible (cf. the Sun). It just might not be technologically easy.
The Sun is not technology (= tools, machinery, modifications, arrangements, and procedures).
It seems like there is steady progress at the fusion frontiers.
Though in the case of ITER the “steady progress” is finishing pouring concrete for the foundations, not tweaking tokamak parameters for higher gain!
Fission is sufficient.
Is this an opinion or a factual statement? If the latter, I’d like to see some refs.
http://www.fhi.ox.ac.uk/wp-content/uploads/intergalactic-spreading.pdf
Thank you. An interesting read. I found your treatment very thorough given its premises and approach. Sadly, we disagree at a point which you seem to take as given without further treatment, but which I question:
The ability and energy to set up infrastructure to exploit interplanetary resources with sufficient net energy gain to mine Mercury at the required scale (much less build a Dyson sphere).
The problem here is that I do not have references to actually back my opinion on this, and I haven’t yet had time to put my complexity-theoretic and thermodynamic arguments into a sufficiently presentable form.
http://lesswrong.com/lw/ii5/baseline_of_my_opinion_on_lw_topics/
We already have solar panel setups with roughly the required energy efficiency.