The mere ability to hurl things into space doesn’t reduce existential risk at all. The only thing that would do that is the ability to create an independently self-sustaining economy in space. But we are so very far away from that, cheaper space-flight just isn’t of much help now. Far better to just grow the world economy and tech-base faster, then make cheaper space flight when we are nearer the point where an independent space economy is feasible.
Note that a moon/mars base wouldn’t have to produce everything it consumed; there could be some things that just last a long time, like the TerraPower nuclear reactor, or containment domes that naturally last a long time, or large stores of food or chemicals that just sit on the moon for a long time. Most importantly for Mars, the effort put into warming the planet and finding suitable synthetic life-forms to convert the atmosphere would be a one-off investment that would pay returns forever.
The moon/mars base could ride out a nuclear winter, spend decades finding a cure to a bioengineered virus, and maybe even find a highly effective blue-goo to fight grey goo (though this last one is admittedly much harder, but 2 out of 3 ain’t bad).
I’m going to tech-nerd out and elaborate on some of the things you said. This is a joyous thing, so thanks for the opportunity. ;-)
like the TerraPower nuclear reactor
You can get much the same effect with any breeder reactor; indeed, if you’re sending it to the moon or mars, a LFTR would probably be a better investment. But either one works.
or containment domes that naturally last a long time
These are a very reasonable thing to expect. For building on the moon or mars with native materials, the easiest thing to do is form it into bricks and build masonry structures. Arches and domes are not only easy structures to make from bricks, but they are extraordinarily stable structures, capable of remaining in place even after taking considerable damage and wear.
Plus, on the moon you would probably build very thick domes (or half-cylinders) to get enough radiation shielding. Those things would naturally be very strong.
I agree with Robin, and underground refuges do compete with space, in our advocacy/attention if nothing else. Heck if one is keen on exploiting the moon-landing legacy NASA budget, push for more Biosphere 2 type projects nominally in preparation for space travel.
I’m worried about being on the other side of the debate from both Robin and Carl.
I guess I was thinking of Nick Bostrom giving a speech praising the existing private space industry, and that adding some legitimacy to the claim that private spaceflight is for the greater good. In fact exactly this mechanism (with Stephen Hawking advocating instead of Nick) is actually contributing to the resurgence of space that we do have.
This mechanism is cheap, and it diverts resources from places where they clearly do absolutely no good for existential risks, to somewhere where they do some small amount of good.
You could also advocate the construction of an underground shelter, but as others have commented, this has emotional connotations of selfishness, so although you get more risk reduction per unit money, you get less per unit advocacy (maybe).
Heck if one is keen on exploiting the moon-landing legacy NASA budget, push for more Biosphere 2 type projects nominally in preparation for space travel.
Programs of that sort are generally not self-sufficient and isolated enough to substantially reduce existential risk. For example, a gray goo scenario will hit those about as hard as it hits anywhere else. And such programs are rarely long-term enough to be able to remain isolated for long if normal infrastructure gives out.
To argue that we shouldn’t devote some resources to it, I think it would be necessary to argue that the disadvantages outweigh the advantages. Arguing that the advantages are relatively small doesn’t really cut it when the future of civilisation is at stake.
Arguing that the advantages are relatively small doesn’t really cut it when the future of civilisation is at stake.
Yes it does. That advantages are relatively small (as compared to other existential risk reduction plans) is meaningful, since it suggests reallocation of resources. Saying that we can’t compromise because “the future of civilization is at stake” invites stupidity.
But the comparison to other existential risk reduction plans is not the right comparison. We should compare the other uses to which the resources will likely be put. Those usually won’t be existential risk reduction projects.
That’s what always gets me about policy debates. If we’re debating what an LW member who gets put in charge of the national budget should do, Nesov has it. If asking what every LW member should vote for if a referendum specifically on “allocate billions to asteroid defense” comes up, torekp is correct. I am annoyed by disagreements between people who actually agree which take this form.
So, the case you are apparently attempting to make is that all resources that could be spent on asteroid deflecting would be better spent on other things. Maybe—but that is far from obvious. Here is what is currently happening:
I’m not attempting to make that case—at some (sufficiently low) level of resources, the marginal worth of asteroid-avoidance might become competitive.
Right—OK—that’s what I was saying. Some people are space cadets—and I figure some of them can probably make useful contributions.
Space has some other possibilities for reducing risks too. For example, communications satellites network the world, make everyone friends—and reduce the chances of war. Of course there’s also star wars—but I don’t think that space can be simply written off as not helping.
Asteroids larger than 1 km hit the Earth about every 500,000 years. (source). That’s in the large-scale devastation but not extinction range. Indeed, even asteroids a few tens or hundreds of meters across can cause major devastation. The object that caused the Tunguska event is estimated to have been between 50 and 80 meters across, and such impacts occur every few hundred years or so. Historically such events have had minimal loss of human life, but that’s partially due to much less of Earth being populated by humans than it is now. So even without worrying about existential-level threats, asteroid impacts pose a substantial risk to human life. As the population grows that risk will become more severe.
How frequent are extinction-level asteroid collisions? There’s some disagreement there, but the rate seems to be somewhere between 1 per 40 million and 1 per 200 million years. That makes it plausibly a low-probability existential threat, but how does it compare to other existential risks? How does it compare to the chance of, say, global thermonuclear war, or the probability of a uFAI arising? If one assigns a very low probability to uFAI, or a low probability not to uFAI itself but to an AI going FOOM, then this becomes potentially more relevant.
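To put rough numbers on the rates quoted above, here is a quick sketch. The per-year rates are the ones from this thread; treating impacts as a Poisson process is my own assumption:

```python
import math

def prob_in_window(rate_per_year, years=100):
    """Chance of at least one event in the window,
    treating impacts as a Poisson process (an assumption)."""
    return 1 - math.exp(-rate_per_year * years)

# Rates quoted in this thread:
print(prob_in_window(1 / 500_000))      # >1 km impact this century, ~2e-4
print(prob_in_window(1 / 40_000_000))   # extinction-level, higher-rate estimate
print(prob_in_window(1 / 200_000_000))  # extinction-level, lower-rate estimate
```

So even on the higher-rate estimate, an extinction-level impact in the next century comes out to a few in a million—small, but not zero.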
Note also that one doesn’t really need an existential-level asteroid impact to permanently ruin human life. If we use up enough easily accessible resources on Earth, especially fossil fuels, then it may not be possible to bootstrap ourselves back up to modern tech levels after a substantial setback. As we use more limited resources this risk becomes more serious. We’re not anywhere near using up the deuterium supply, but that’s also limited, as is the supply of U-235, which is much closer to depletion (although again, not very close). This permanent resource crunch after a major civilization setback is enough of a risk that Nick Bostrom takes it seriously (see the first link given above). An asteroid in the 3-5 km range, if it hit in a bad way, could cause this sort of scenario.
The main problem with the asteroid type of event is that there’s very little we can do about it. Breaking up an asteroid into little parts won’t actually do much if they still impact Earth, since the total kinetic energy delivered is still about the same. There’s more of a chance at redirecting an asteroid if one attaches a solar sail or detonates a large nuke at just the right spot. But all such options require knowing about the threat well in advance.
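A back-of-the-envelope sketch of why fragmentation doesn’t help. The density and encounter speed below are typical textbook values I’m assuming, not figures from this thread:

```python
import math

MT_TNT = 4.184e15  # joules per megaton of TNT

def impact_energy(diameter_m, density=3000.0, velocity=20_000.0):
    """Kinetic energy of an impactor. The stony density (~3000 kg/m^3)
    and encounter speed (~20 km/s) are assumed typical values."""
    r = diameter_m / 2
    mass = (4 / 3) * math.pi * r ** 3 * density
    return 0.5 * mass * velocity ** 2

whole = impact_energy(1000)  # 1 km body
# 1000 equal fragments: each has 1/1000 the mass (1/10 the diameter),
# so the total kinetic energy delivered to Earth is unchanged.
fragments = 1000 * impact_energy(1000 / 10)

print(whole / MT_TNT)     # tens of thousands of megatons
print(fragments / whole)  # ~1: fragmentation alone doesn't reduce it
```

Deflection works because it changes where that energy goes, not how much of it there is.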
There are two other points that support a space program as an existential risk reducer which Timtyler didn’t touch on but which are worth bringing up: 1) Even if we can’t construct self-sustaining colonies yet, every bit we go in that direction increases the chance that we will be able to have such colonies before any event occurs that wipes out or substantially reduces human life on Earth. 2) There are many space-based extinction threats other than rogue asteroids where having advance warning even by a few days or hours could substantially reduce the risk. These include supernova risks, primarily from IK Pegasi A and Betelgeuse. Our current estimates put both of these as low-probability events. Under current estimates Betelgeuse is too far away given the predicted supernova size. But there’s a non-negligible chance that our models are wrong, and even a small error could substantially ruin our day. IK Pegasi A is close enough that if it went through a Type Ia supernova now (well, 150 years ago), it would easily be an extinction-level event. It is likely that the star will go through such an event at some point in the future, but current estimates put that a few million years away. But again, modeling issues could make this drastically wrong (although the chance of a modeling error is much smaller than with Betelgeuse). Then there are other, more exotic and as-yet difficult-to-estimate threats such as gamma ray bursts and rogue brown dwarfs.
Where are you getting your estimates of risk probability from? If by Nano you mean a nanotech gray goo scenario, then frankly that seems much less likely than 1⁄5000 in the next century. People who actually work with nanotech put that sort of scenario as extremely unlikely for a variety of reasons, including that there’s too much variation in common chemical compounds to be able to make nanotech devices which acted as universal assimilators, and there’s no clear way that such entities are going to get efficient energy resources to do so. Now, one might be able to argue that very intelligent AI might be able to solve those problems, but in that case, you’re talking about just the AI problem and nanotech becomes incidental to that.
I’m not sure what you mean by “bio”- but if you mean biological threats then this seems unlikely to be an existential level threat for the simple reason that we can see that it is very rare for a species to be wiped out by a pathogen. We might be able to make a deliberately dangerous pathogen but that requires motivation and expertise. The set of people with both the desire and capability to construct such entities is likely small, and will likely remain small for the indefinite future.
I assume “Bio, Nano, AI” to mean “any global existential threats brought on by human technology”, which is a big disjunction with plenty of unknown unknowns, and we already have one example (nuclear weapons) that could not have plausibly been predicted 50 years beforehand. Even if you discount the probabilities of hard AI takeoff or nanotech development, you’d have to have a lot of evidence in order to put such a small probability on any technological development of the next hundred years threatening global extinction.
As someone who does largely discount the threats mentioned (I believe that the operationally-significant probability for foom/grey goo is order 10^-3/10^-5, and the best-guess probability is order 10^-7/10^-7), I still endorse the logic above.
Er, maybe I was being unclear. Even if you discount a few specific scenarios, where do you get the strong evidence that no other technological existential risk with probability bigger than .001 will arise in the next hundred years, given that forecasters a century ago would have completely missed the existential risk from nuclear weapons?
I agree that cataloging near-earth objects is obviously worth a much bigger current investment than it has at present, but I think that an even bigger need exists for a well-funded group of scientists from various fields to consider such technological existential risks.
If I wanted to exterminate the human race using nanotechnology, there are two methods I would think about. First method, airborne replicators which use solar power for energy and atmospheric carbon dioxide for feedstock. Second method, nanofactories which produce large quantities of synthetic greenhouse gases. Under the first method, one should imagine a cloud of nanodust that just keeps growing until most of the CO2 is used up (at which point all plants die). Under the second method, the objective is to heat the earth until the oceans boil.
For the airborne replicator, the obvious path is “diamondoid mechanosynthesis”, as described in papers by Drexler, Merkle, Freitas and others. This is the assembly of rigid nanostructures, composed mostly of carbon atoms, through precisely coordinated deposition of small reactive clusters of atoms. To assemble diamond in this way, one might want a supply of carbon chains, which remain sequestered in narrow-diameter buckytubes until they are wanted, with the buckytubes being positioned by rigid nanomechanisms, and the carbon chains being synthesized through the capture and “cracking” of CO2 much as in plants. The replicator would have a hard-vacuum interior in which the component assembly of its progeny would occur, and a sliding or telescoping mechanism allowing temporary expansion of this interior space. The replicator would therefore have at least two configurations: a contracted minimal one, and an expanded maximal one large enough to contain a new replicator assembled in the minimal configuration.
There are surely hundreds or thousands of challenging subproblems involved in the production of such a nanoscale doomsday device—power supply, environmental viability (you would want it to disperse but to remain adrift), what to do with contaminants, to say nothing of the mechanisms and their control systems—but it would be a miracle if it were literally thermodynamically impossible to make such a thing. Cells do it, and yes they are aqueous bags of floppy proteins rather than evacuated diamond mechanisms, but I would think that has more to do with the methods available to DNA-based evolution than with the physical impossibility of free-living rigid nanobots. The Royal Society report to which you link hardly examines this topic. It casually cites a few qualitative criticisms made by Smalley and others, and attaches some significance to a supposed change of heart by Drexler—but in fact, Drexler simply changed his emphasis, from accident to abuse. There is no reason to expect free-living rogue replicators to emerge by accident from nanofactories, because such industrial assemblers will be tailored to operate under conditions very different from the world outside the factory. But there has been no concession that free-living nanomechanical replicators are simply impossible, and people like Freitas and Merkle who continue to work on the details of mechanosynthesis have many times expressed the worry that it looks alarmingly easy (relatively speaking) to design such devices.
As for my second method, you don’t even need free-living replicators, just mass production of the greenhouse-gas nanofactories, and a supply of appropriate ingredients.
I’m not sure if this counts as an existential threat, but I’m more concerned about a biowar wrecking civilization—enough engineered human and food diseases that civilization is unsustainable.
I can’t judge likelihood, but it’s at least a combination of plausible human motivations and technology. Your tech is plausible, but it’s hard to imagine anyone wanting not just to wipe out the human race, but also to do such damage to the biosphere.
There are a few people who’d like the human race to be gone (or at least who say they do), but as far as I know, they all want plants and animals to continue without being affected by people.
There are definitely people who would destroy the whole world if they could. Berserkers, true nihilists, people who hate life, people who simply have no empathy, dictators having a bad day. Even a few dolorous “negative utilitarians” exist who might do it as an act of mercy. But the other types are surely more numerous.
Massive overconfidence. You need to go closer to 50⁄50.
Where is your estimate coming from?
My estimate comes from the following: 1) Experts suggest that the possibility is very unlikely. For example, the Royal Society’s official report on the dangers of nanotech concluded that this sort of scenario was extremely unlikely. See the report here (and good Bayesians should listen to subject matter experts). 2) Every plausible form of nanotech yet investigated shows no capability of gray-gooing. For example, consider DNA nanotechnology, an area where we’ve had a fair bit of success with both computation and constructing machines. Yet these devices work only in a small range of pH values and temperatures, and often require specific specialized enzymes. Also, as with any organic nanotech, they will face competition and potentially predation from microorganisms. Inorganic nanotech faces other problems, such as less available energy and far fewer options for possible chemical constructions; simply not using carbon already reduces the grey-goo potential a lot.
1) experts suggest that the possibility is very unlikely.
But how did you translate “very unlikely” to “less than 1 in 5000”? Why not say 1%? Or 3%? Or 1 in 10^100?
I think that I need to do an article on why one shouldn’t be so keen to assign very low probabilities to events where the only evidence is extrapolative.
Unfortunately, you often have to rule intuitively. How does complexity figure in the estimation of probability of gray goo? Useful heuristic, but no silver bullet.
I think that one has to differentiate between the perfect unbiased individual rationalist who uses heuristics but ultimately makes the final decision from first principles if necessary, and the semi-rationalist community, where individual members vary in degree of motivated cognition.
The latter works better with more rigid rules and less leeway for people to believe what they want. It’s a tradeoff: random errors induced by rough-and-ready estimates, versus systematic errors induced by wishful thinking of various forms.
Less than 1 in 5000 sounds about right to me. I’m much more worried about other nano-dangers (e.g. clandestine brainwashing) than grey goo.
Not only is there the problem of technological feasibility, but even if it’s possible there is the still larger problem of economic feasibility. Molecular von Neumann Machines, if possible, should be vastly more difficult to develop than much more efficient static nano-assemblers operating in a controlled environment (probably vacuum?) and integrated into an economy with mixed nano- and macrotech taking advantage of specialization, economies of scale, etc. Static nano-assemblers should already be ubiquitous long before Molecular von Neumann Machines start to become feasible. So why develop them in the first place? For medical applications, specialized medical nanobots running on glucose and cheaply mass-produced in the static nano-assemblers should also beat them. They’d be useful in space and for sending to other planets, but there wouldn’t be all that much money in that, and sending a larger probe with nano-assemblers and assorted equipment would also do.
Since there would be no overwhelming incentive against outlawing the development of MvNM, doing so would be feasible—and considering how easy it should be to scare people with the gg scenario in such a world, very likely.
That pretty much leaves secret development as some sort of weapon, which would make gg defense a military issue. Nano-assemblers should be much better at producing nano-hunters and nano-killers (or more assemblers, mining equipment, planes, rockets, bombs) than MvNM are at producing more of themselves, and nano-hunters and nano-killers much better at finding and destroying them; there’d also be the option of using macroscopic weapons against larger concentrations.
The original discussion was not concerned with the dangers of grey goo per se, but with any extinction risk associated with nanotech. Remember, the original question, the point of the discussion, was whether asteroids were irrelevant as an x-risk.
So whilst you make good points, it seems that we now have a lost-purpose debate rather than a purposeful collaborative discussion.
Other nano-risks aren’t necessarily extinction risks, though. And while I’m sort of worried that someone might secretly use nano to rewire the brains of important people, and later of everyone, to absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad), or something along those lines, it doesn’t seem obvious that there is anything effective we could spend money on now that would help protect us, unlike asteroids. At least not at the levels of spending that asteroid danger prevention could usefully absorb.
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000.
You have to consider military nanotech. You have to consider nano-terrorism and the balance of attack versus defence, you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?), etc etc etc.
I am sure that there are at least 3 nano-risk scenarios documented on the internet that you haven’t even thought of, which instantly invalidates claiming a figure as low as, say, 1⁄5000 for the extinction risk before you have considered them.
This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000.
The question wasn’t whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers.
There doesn’t seem to be any good way to spend money so that all possible nano risks will be mitigated (other than lobbying to ban all nano research everywhere, and I’m far from convinced that the potential dangers of nano are greater than the benefits). I’m not even sure there is a good way to spend money on mitigation of any single nano risk.
The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use, we can’t do all that much useful research in that direction, and spending after we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent previously will not affect the result all that much.
you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?)
Yes, I did; that’s one of the most obvious ones.
It’s not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I’m not sure if there is any need to separate isotopes in whatever machines pre-process materials in/for nano-assemblers, or if they lend themselves to being modified for that. Assuming they do, you’d need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don’t see how spending money now could help in any way.
This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.
I’m not sure the probability of a serious error in the best available argument against something can be considered a lower bound on the probability you should assign it in general. In the case of the LHC, if there is a 1 in 20 chance of a mistake that doesn’t really change the conclusion much, a 1 in 100 chance of a mistake such that the real probability is 1 in 100,000, and a 1 in 10,000 chance of a mistake such that the real probability is 1 in 1000, then 1 in a million could still be roughly the correct estimate.
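Working through the hypothetical numbers above (these are the made-up figures from my comment, not real estimates about the LHC):

```python
# Hypothetical branches from the paragraph above. The 1-in-20
# "harmless mistake" branch still yields ~1e-6, so it is folded
# into p_correct here.
p_small_mistake = 1 / 100      # mistake => real risk is 1 in 100,000
p_large_mistake = 1 / 10_000   # mistake => real risk is 1 in 1000
p_correct = 1 - p_small_mistake - p_large_mistake

total = (p_correct * 1e-6
         + p_small_mistake * 1e-5
         + p_large_mistake * 1e-3)
print(total)  # ~1.2e-6, still roughly the original 1-in-a-million figure
```

The point being that an error probability larger than the headline estimate doesn’t automatically invalidate it; what matters is how the probability mass is distributed across the ways the argument could be wrong.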
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000
The 1⁄5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go into finding the very large asteroids also help track the others, reducing the chance of human life lost even outside existential risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you’ve made a very good point above about the many non-gray-goo scenarios that make nanotech a severe potential existential risk. So I’ll agree that if one’s comparing the probability of a nanotech existential risk scenario to the probability of an asteroid existential risk scenario, the nanotech one is more likely.
Your point about the impact of nanotech on nuclear proliferation I find particularly disturbing. The potential for nanotech to greatly increase the efficiency of enriching uranium seems deeply worrisome and that’s really the main practical limitation in building fission weapons.
Upvoted for updating. I agree that smaller asteroids are an important consideration for space; we expect about one Tunguska event per century, I believe, and as far as I know each one stands a ~5% chance of hitting a populated area. Removing that 5% chance for the next Tunguska would be a good thing.
Accidental grey goo doesn’t seem plausible, and purposeful destructive use of nanotech doesn’t necessarily fall in that category. We can have nanomachines that act as bioweapons, infecting people and killing them.
Are you disagreeing with something I said? I’m not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that’s necessary). Nanotech might be able to do things that a virus can’t, but that would be the sort of thing I mentioned. Anyway I don’t see how we could effectively spend money now to prevent either.
1) Even if we can’t construct self-sustaining colonies yet, every bit we go in that direction increases the chance that we will be able to have such colonies before any event occurs that wipes out or substantially reduces human life on Earth.
It’s not generally valid, since this diverts resources from development of other potentially relevant tech that could help with establishing a colony once the time is right.
There are many space based extinction threats other than rogue asteroids where having advance warning even by a few days or hours could substantially reduce the risk. These include supernova risks primarily from IK Pegasi A and Betelgeuse
No. We know that there are changes in a star before the supernova occurs. For example, in a Type II supernova, the radiation level initially increases linearly. For other supernova types the luminosity of the star does sometimes increase before the supernova event itself. Also, hours before a supernova, there may be a drastic increase in neutrino production.
It is also likely that more detailed observation of stars will give us a better idea what sort of more subtle signs show up prior to supernovae.
Think about it. You observe changes in a star 8 light-hours away from earth, and radio your observations back. What speed do the radio waves travel at? c. What speed does the light bearing the original observation travel at? c. What speed does the supernova blast travel at? also c. Neutrinos travel so close to c it makes no difference.
Think about it. You observe changes in a star 8 light-hours away from earth, and radio your observations back. What speed do the radio waves travel at? c. What speed does the light bearing the original observation travel at? c. What speed does the supernova blast travel at? also c. Neutrinos travel so close to c it makes no difference.
If observed changes to a star happen well before the supernova event itself then the fact that everything is happening at c doesn’t matter. Say for example that the neutrino flux increase happens 24 hours before hand. That means we have a 24 hour warning before the supernova event. Similarly, if we see an increase in luminosity before the supernova we still get advance warning. What matters is that there is a delay between when stars show signs of supernovaing and when they actually supernova.
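A little arithmetic makes this concrete. Both numbers below are purely illustrative—the distance is roughly IK Pegasi’s, and the 24-hour neutrino lead is hypothetical:

```python
# Both the precursor signal and the blast propagate at ~c, so the
# warning time equals the lead time at the source, independent of
# how far away the star is.
distance_ly = 150.0          # roughly IK Pegasi's distance (illustrative)
precursor_lead_h = 24.0      # hypothetical neutrino-flux lead time

signal_arrival_h = distance_ly * 365.25 * 24  # hours after emission
blast_arrival_h = signal_arrival_h + precursor_lead_h
print(blast_arrival_h - signal_arrival_h)  # 24.0, whatever distance_ly is
```

Change `distance_ly` to anything you like: the warning window stays 24 hours, which is why detecting the precursor (rather than being closer) is what buys you time.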
The point is that being closer to the star when that happens doesn’t provide you with more forewarning than if you look at it from home.
I don’t think anyone is advocating that we send actual probes to Betelgeuse or IK Pegasi. I’m confused why one would think that would even be on the table. Even if we sent a probe today at a tenth of the speed of light (well beyond our current capabilities), it would still take around 1500 years to get to IK Pegasi. I don’t see how that could be considered remotely useful.
What is helpful is having more space-based observation equipment in our solar system. The more we put into space, the less of a problem we have with atmospheric interference, artificial radio sources, and general light pollution. To use one specific example that would help a lot: if we had a series of optical telescopes spread out around the solar system, we could use parallax measurements to get a better idea how far away Betelgeuse is. For a variety of reasons there’s a lot of uncertainty about its distance, with 330 light years as a lower estimate and around 700 as an upper one, though estimates seem to be settling around 640. Given the inverse square law for radiation, this matters for a supernova concern: a difference of 300 light years corresponds to about a factor of 4 in radiation strength. Overall, most of the interesting, practical investigation and reduction of astronomical existential risks can be done right here in our home system.
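For the inverse-square point, a one-liner using the 330 vs ~640 light-year estimates mentioned above:

```python
def flux_ratio(d_near_ly, d_far_ly):
    """Factor by which received radiation differs between two
    distance estimates, by the inverse square law."""
    return (d_far_ly / d_near_ly) ** 2

print(flux_ratio(330, 640))  # ~3.8, i.e. about a factor of 4
```

So pinning down the distance really does change the risk assessment by a large multiplicative factor, not a rounding error.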
The mere ability to hurl things into space doesn’t reduce existential risk at all. The only thing that would do that is the ability to create an independently self-sustaining economy in space. But we are so very far away from that, cheaper space-flight just isn’t of much help now. Far better to just grow the world economy and tech-base faster, then make cheaper space flight when we are nearer the point where an independent space economy is feasible.
Note that a moon/mars base wouldn’t have to produce everything it consumed; there could be some things that just last a long time, like the terrapower nuclear reactor, or containment domes that naturally last a long time, or large stores of food or chemicals that just sit on the moon for a long time. Most importantly for Mars, the effort put into warming the planet and finding suitable synthetic life-forms to convert the atmosphere would be a one-off investment that would pay returns forever.
The moon/mars base could ride out a nuclear winter, spend decades finding a cure to a bioengineered virus, and maybe even find a highly effective blue-goo to fight grey goo (though this last one is admittedly much harder, but 2 out of 3 ain’t bad).
I’m going to tech-nerd out and elaborate on some of the things you said. This is a joyous thing, so thanks for the opportunity. ;-)
like the terrapower nuclear reactor

You can get much the same effect with any breeder reactor; indeed, if you’re sending it to the moon or mars, an LFTR (liquid fluoride thorium reactor) would probably be a better investment. But either one works.
or containment domes that naturally last a long time

These are a very reasonable thing to expect. For building on the moon or mars with native materials, the easiest thing to do is form the local regolith into bricks and build masonry structures. Arches and domes are not only easy structures to make from bricks, but they are extraordinarily stable, capable of remaining in place even after taking considerable damage and wear.
Plus, on the moon you would probably build very thick domes (or half-cylinders) to get enough radiation shielding. Those things would naturally be very strong.
I agree with Robin, and underground refuges do compete with space, in our advocacy/attention if nothing else. Heck, if one is keen on exploiting the moon-landing-legacy NASA budget, push for more Biosphere 2-type projects nominally in preparation for space travel.
Worried about being on the other side of the debate from both Robin and Carl.
I guess I was thinking of Nick Bostrom giving a speech praising the existing private space industry, and that adding some legitimacy to the claim that private spaceflight is for the greater good. In fact exactly this mechanism (with Stephen Hawking advocating instead of Nick) is actually contributing to the resurgence of space that we do have.
This mechanism is cheap, and it diverts resources from places where they clearly do absolutely no good for existential risks, to somewhere where they do some small amount of good.
You could also advocate the construction of an underground shelter, but as others have commented, this has emotional connotations of selfishness, so although you get more risk reduction per unit money, you get less per unit advocacy (maybe).
Programs of that sort are generally not self-sufficient and isolated enough to substantially reduce existential risk. For example, a gray goo scenario will hit those about as hard as it hits anywhere else. And such programs are rarely long-term enough to be able to remain isolated for long if normal infrastructure gives out.
Yes, I agree.
We can’t colonise other habitats just yet—but we could get into a better position to punch out incoming meteorites.
This risk is relatively insignificant.
To argue that we shouldn’t devote some resources to it, I think it would be necessary to argue that the disadvantages outweigh the advantages. Arguing that the advantages are relatively small doesn’t really cut it when the future of civilisation is at stake.
Yes it does. That the advantages are relatively small (as compared to other existential risk reduction plans) is meaningful, since it suggests reallocation of resources. Saying that we can’t compromise because “the future of civilization is at stake” invites stupidity.
But the comparison to other existential risk reduction plans is not the right comparison. We should compare the other uses to which the resources will likely be put. Those usually won’t be existential risk reduction projects.
Who is this argument supposed to be addressed to?
That’s what always gets me about policy debates. If we’re debating what an LW member who gets put in charge of the national budget should do, Nesov has it. If asking what every LW member should vote for if a referendum specifically on “allocate billions to asteroid defense” comes up, torekp is correct. I am annoyed by disagreements between people who actually agree which take this form.
So, the case you are apparently attempting to make is that all resources that could be spent on asteroid deflection would be better spent on other things. Maybe—but that is far from obvious. Here is what is currently happening:
http://en.wikipedia.org/wiki/Asteroid_impact_avoidance
I’m not attempting to make that case—at some point (sufficiently low amount of resources) marginal worth of asteroid-avoidance might become competitive.
Right—OK—that’s what I was saying. Some people are space cadets—and I figure some of them can probably make useful contributions.
Space has some other possibilities for reducing risks too. For example, communications satellites network the world, make everyone friends—and reduce the chances of war. Of course there’s also star wars—but I don’t think that space can be simply written off as not helping.
Agreed
Is it that insignificant?
Asteroids larger than 1 km hit the Earth about every 500,000 years (source). That’s in the large-scale devastation but not extinction range. Indeed, even asteroids a few tens or hundreds of meters across can cause major devastation. The object that caused the Tunguska event is estimated to have been between 50 and 80 meters, and such impacts occur every few hundred years or so. Historically such events have caused minimal loss of human life, but that’s partially because much less of Earth was populated by humans than is now. So even without worrying about existential-level threats, asteroid impacts pose a substantial risk to human life. As the population grows, that risk will become more severe.
How frequent are extinction-level asteroid collisions? There’s some disagreement there, but the rate seems to be somewhere around 1 per 40–200 million years. That seems plausibly like a low-risk existential threat, but how does one compare it to other existential risks? How does it compare to the chance of, say, global thermonuclear war, or the probability of a uFAI arising? If one puts a very low probability on a uFAI, or a low probability not on a uFAI itself but on an AI going FOOM, then this becomes potentially more relevant.
Note also that one doesn’t really need an existential-level asteroid impact to permanently ruin human life. If we use up enough resources on Earth, especially fossil fuels, then it may not be possible to bootstrap ourselves back up to modern tech levels if the tech level is substantially reduced. As we use more limited resources this risk becomes more serious. We’re not anywhere near using up the deuterium supply, but that’s also limited, as is the supply of U-235, which is much closer to depletion (although again, not very close). This permanent resource crunch after a major civilization setback is enough of a risk that Nick Bostrom takes it seriously (see first link given above). An asteroid in the 3–5 km range, if it hit in a bad way, could cause this sort of scenario.
The main problem with the asteroid type of event is that there’s very little we can do about it. Breaking up an asteroid into little parts won’t actually do much if they still impact Earth, since the total kinetic energy delivered is still about the same. There’s more of a chance at redirecting an asteroid if one attaches a solar sail or a large nuke at just the right spot. But all such options require knowing about the threat a while in advance.
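To put rough numbers on why fragmentation alone doesn’t help, here is a back-of-the-envelope sketch. The density and impact speed are illustrative assumptions of mine, not figures from the thread:

```python
import math

# Illustrative assumptions: a 1 km stony asteroid at a typical impact speed.
radius_m = 500.0    # 1 km diameter
density = 3000.0    # kg/m^3, roughly right for stony asteroids
speed = 20_000.0    # m/s, a commonly quoted impact speed

mass = density * (4.0 / 3.0) * math.pi * radius_m ** 3
kinetic_energy_j = 0.5 * mass * speed ** 2

# Express the energy in megatons of TNT (1 Mt = 4.184e15 J).
megatons = kinetic_energy_j / 4.184e15
print(f"{megatons:,.0f} Mt of TNT equivalent")  # on the order of 10^4 to 10^5 Mt

# Splitting the asteroid into n fragments that all still hit Earth delivers
# the same total kinetic energy, just spread over more impact sites.
```

This is why deflection (changing the trajectory) is the goal, rather than destruction.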
There are two other points supporting a space program as an existential risk reducer which Timtyler didn’t touch on but which are worth bringing up: 1) Even if we can’t construct self-sustaining colonies yet, every bit we go in that direction increases the chance that we will have such colonies before any event occurs that wipes out or substantially reduces human life on Earth. 2) There are many space-based extinction threats other than rogue asteroids where having advance warning even by a few days or hours could substantially reduce the risk. These include supernova risks, primarily from IK Pegasi A and Betelgeuse. Our current estimates put both of these as low-probability events. Under current estimates Betelgeuse is too far away given the predicted supernova size. But there’s a not-insignificant chance that our models are wrong, and even being a small bit off could substantially ruin our day. IK Pegasi A is close enough that if it went through a Type Ia supernova now (well, 150 years ago), it would easily be an extinction-level event. It is likely that the star will go through one at some point in the future, but current estimates put that a few million years away. Again, modeling issues could make this drastically wrong (although the chance of a modeling error is much smaller than with Betelgeuse). Then there are other more exotic and as-yet difficult-to-estimate threats such as gamma ray bursts and rogue brown dwarfs.
Implying a 1⁄5000 chance this century. That’s small potatoes compared to Bio, Nano, AI.
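The arithmetic behind that 1⁄5000 figure can be checked directly: treating large impacts as a Poisson process at the thread’s stated rate of one per 500,000 years, the chance of at least one impact in a century is:

```python
import math

rate_per_year = 1 / 500_000   # >1 km impacts, using the rate quoted above
years = 100

# Poisson process: P(at least one event in t years) = 1 - exp(-rate * t)
p_century = 1 - math.exp(-rate_per_year * years)

print(p_century)       # ~0.0002
print(1 / p_century)   # ~5000, i.e. roughly a 1-in-5000 chance per century
```

For rates this small the Poisson correction is negligible, so the naive 100/500,000 division gives essentially the same answer.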
Where are you getting your estimates of risk probability from? If by Nano you mean a nanotech gray goo scenario, then frankly that seems much less likely than 1⁄5000 in the next century. People who actually work with nanotech consider that sort of scenario extremely unlikely for a variety of reasons, including that there’s too much variation in common chemical compounds to make nanotech devices which act as universal assimilators, and there’s no clear way such entities would get efficient energy sources to do so. Now, one might be able to argue that very intelligent AI could solve those problems, but in that case you’re talking about just the AI problem, and nanotech becomes incidental to it.
I’m not sure what you mean by “bio”- but if you mean biological threats then this seems unlikely to be an existential level threat for the simple reason that we can see that it is very rare for a species to be wiped out by a pathogen. We might be able to make a deliberately dangerous pathogen but that requires motivation and expertise. The set of people with both the desire and capability to construct such entities is likely small, and will likely remain small for the indefinite future.
I assume “Bio, Nano, AI” to mean “any global existential threats brought on by human technology”, which is a big disjunction with plenty of unknown unknowns, and we already have one example (nuclear weapons) that could not have plausibly been predicted 50 years beforehand. Even if you discount the probabilities of hard AI takeoff or nanotech development, you’d have to have a lot of evidence in order to put such a small probability on any technological development of the next hundred years threatening global extinction.
As someone who does largely discount the threats mentioned (I believe that the operationally-significant probability for foom/grey goo is order 10^-3/10^-5, and the best-guess probability is order 10^-7/10^-7), I still endorse the logic above.
Er, maybe I was being unclear. Even if you discount a few specific scenarios, where do you get the strong evidence that no other technological existential risk with probability bigger than .001 will arise in the next hundred years, given that forecasters a century ago would have completely missed the existential risk from nuclear weapons?
I agree that cataloging near-earth objects is obviously worth a much bigger current investment than it has at present, but I think that an even bigger need exists for a well-funded group of scientists from various fields to consider such technological existential risks.
If I wanted to exterminate the human race using nanotechnology, there are two methods I would think about. First method, airborne replicators which use solar power for energy and atmospheric carbon dioxide for feedstock. Second method, nanofactories which produce large quantities of synthetic greenhouse gases. Under the first method, one should imagine a cloud of nanodust that just keeps growing until most of the CO2 is used up (at which point all plants die). Under the second method, the objective is to heat the earth until the oceans boil.
For the airborne replicator, the obvious path is “diamondoid mechanosynthesis”, as described in papers by Drexler, Merkle, Freitas and others. This is the assembly of rigid nanostructures, composed mostly of carbon atoms, through precisely coordinated deposition of small reactive clusters of atoms. To assemble diamond in this way, one might want a supply of carbon chains, which remain sequestered in narrow-diameter buckytubes until they are wanted, with the buckytubes being positioned by rigid nanomechanisms, and the carbon chains being synthesized through the capture and “cracking” of CO2 much as in plants. The replicator would have a hard-vacuum interior in which the component assembly of its progeny would occur, and a sliding or telescoping mechanism allowing temporary expansion of this interior space. The replicator would therefore have at least two configurations: a contracted minimal one, and an expanded maximal one large enough to contain a new replicator assembled in the minimal configuration.
There are surely hundreds or thousands of challenging subproblems involved in the production of such a nanoscale doomsday device—power supply, environmental viability (you would want it to disperse but to remain adrift), what to do with contaminants, to say nothing of the mechanisms and their control systems—but it would be a miracle if it were literally thermodynamically impossible to make such a thing. Cells do it, and yes, they are aqueous bags of floppy proteins rather than evacuated diamond mechanisms, but I would think that has more to do with the methods available to DNA-based evolution than with the physical impossibility of free-living rigid nanobots. The Royal Society report to which you link hardly examines this topic. It casually cites a few qualitative criticisms made by Smalley and others, and attaches some significance to a supposed change of heart by Drexler—but in fact, Drexler simply changed his emphasis, from accident to abuse. There is no reason to expect free-living rogue replicators to emerge by accident from nanofactories, because such industrial assemblers will be tailored to operate under conditions very different from the world outside the factory. But there has been no concession that free-living nanomechanical replicators are simply impossible, and people like Freitas and Merkle who continue to work on the details of mechanosynthesis have many times expressed the worry that it looks alarmingly easy (relatively speaking) to design such devices.
As for my second method, you don’t even need free-living replicators, just mass production of the greenhouse-gas nanofactories, and a supply of appropriate ingredients.
I’m not sure if this counts as an existential threat, but I’m more concerned about a biowar wrecking civilization—enough engineered human and food diseases that civilization is unsustainable.
I can’t judge likelihood, but it’s at least a combination of plausible human motivations and technology. Your tech is plausible, but it’s hard to imagine anyone wanting not just to wipe out the human race, but also to do such damage to the biosphere.
There are a few people who’d like the human race to be gone (or at least who say they do), but as far as I know, they all want plants and animals to continue without being affected by people.
There are definitely people who would destroy the whole world if they could. Berserkers, true nihilists, people who hate life, people who simply have no empathy, dictators having a bad day. Even a few dolorous “negative utilitarians” exist who might do it as an act of mercy. But the other types are surely more numerous.
Massive overconfidence. You need to go closer to 50⁄50.
Where is your estimate coming from?
My estimate comes from the following: 1) Experts suggest that the possibility is very unlikely. For example, the Royal Society’s official report on the dangers of nanotech concluded that this sort of scenario is extremely unlikely. See the report here (and good Bayesians should listen to subject-matter experts). 2) Every plausible form of nanotech yet investigated shows no capability of gray-gooing. For example, consider DNA nanotechnology, an area where we’ve had a fair bit of success both with computation and with constructing machines. Yet these work only in a small range of pH values and temperatures, and often require specific specialized enzymes. Also, as with any organic nanotech, they will face competition and potentially predation from microorganisms. Inorganic nanotech faces other problems, such as less energy and far fewer options for possible chemical constructions; not using carbon already reduces the grey goo potential a lot.
But how did you translate “very unlikely” to “less than 1 in 5000″? Why not say 1%? Or 3%? Or 1 in 10^100?
I think that I need to do an article on why one shouldn’t be so keen to assign very low probabilities to events where the only evidence is extrapolative.
Still depends on the nature of the event (Russell’s teapot). There is no default level of certainty, no magical 50⁄50.
Sure, for cases where arbitrary complexity has been added, the “default level of certainty” is 2^-(Complexity).
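As a purely illustrative sketch of what that rule says: a hypothesis carrying k extra bits of arbitrary, unevidenced specification gets a prior penalty of 2^-k, so each added bit halves the default credence.

```python
def complexity_prior(extra_bits: int) -> float:
    """Default prior for a claim with `extra_bits` of arbitrary added detail."""
    return 2.0 ** -extra_bits

# Each bit of unmotivated detail halves the prior.
for k in (1, 10, 20):
    print(k, complexity_prior(k))
# 1 bit  -> 0.5
# 10 bits -> ~0.001
# 20 bits -> ~1e-6
```

The hard part, as noted in the replies, is deciding how many bits of "arbitrary complexity" a scenario like gray goo actually contains; the formula itself does none of that work.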
Unfortunately, you often have to rule intuitively. How does complexity figure in the estimation of probability of gray goo? Useful heuristic, but no silver bullet.
I think that one has to differentiate between the perfect unbiased individual rationalist who uses heuristics but ultimately makes the final decision from first principles if necessary, and the semi-rationalist community, where individual members vary in degree of motivated cognition.
The latter works better with more rigid rules and less leeway for people to believe what they want. It’s a tradeoff: random errors induced by rough-and-ready estimates, versus systematic errors induced by wishful thinking of various forms.
Less than 1 in 5000 sounds about right to me. I’m much more worried about other nano-dangers (e. g. clandestine brain washing) than grey goo.
Not only is there the problem of technological feasibility, but even if it’s possible there is the still larger problem of economic feasibility. Molecular von Neumann machines, if possible, should be vastly more difficult to develop than much more efficient static nano-assemblers operating in a controlled environment (probably vacuum?) and integrated into an economy with mixed nano- and macrotech taking advantage of specialization, economies of scale, etc. Static nano-assemblers should already be ubiquitous long before molecular von Neumann machines start to become feasible. So why develop the latter in the first place? For medical applications, specialized medical nanobots running on glucose and cheaply mass-produced in the static nano-assemblers should also beat them. They’d be useful in space and for sending to other planets, but there wouldn’t be all that much money in that, and sending a larger probe with nano-assemblers and assorted equipment would also do.
Since there would be no overwhelming incentive against outlawing the development of MvNMs, doing so would be feasible, and considering how easy it should be to scare people with the grey goo scenario in such a world, very likely.
That pretty much leaves secret development as some sort of weapon, which would make grey goo defense a military issue. Nano-assemblers should be much better at producing nano-hunters and nano-killers (or more assemblers, mining equipment, planes, rockets, bombs) than MvNMs would be at producing more of themselves, and nano-hunters and nano-killers much better at finding and destroying them; there’d also be the option of using macroscopic weapons against larger concentrations.
The original discussion was not concerned with the dangers of grey goo per se, but with any extinction risk associated with nanotech. Remember, the original question, the point of the discussion, was whether asteroids were irrelevant as an x-risk.
So whilst you make good points, it seems that we now have a lost-purpose debate rather than a purposeful collaborative discussion.
Other nano-risks aren’t necessarily extinction risks, though. And while I’m sort of worried that someone might secretly use nano to rewire the brains of important people, and later of everyone, to absolute loyalty to them (an outcome that would be a lot better than extinction, but still pretty bad), or something along those lines, it doesn’t seem obvious that there is anything effective we could spend money on now that would help protect us, unlike asteroids—at least at the levels of spending that asteroid danger prevention could usefully absorb.
But now you have to catalogue all the possible risks of nanotech, and add a category for “risks I haven’t thought of”, and then claim that the total probability of all that is < 1⁄5000.
You have to consider military nanotech. You have to consider nano-terrorism and the balance of attack versus defence, you have to consider the effects of nanotech on nuclear proliferation (had you thought of that one?), etc etc etc.
I am sure that there are at least 3 nano-risk scenarios documented on the internet that you haven’t even thought of, which instantly invalidates claiming a figure as low as, say, 1⁄5000 for the extinction risk before you have considered them.
This argument reminds me of the case of physicists claiming to have an argument showing that the probability of an LHC disaster was less than 1 in a million, and Toby Ord pointing out that the probability that there was a mistake in their argument was surely > 1 in 1,000, invalidating their conclusion that total probability of an LHC disaster was < 1 in 1 million.
The question wasn’t whether nanotech is potentially more dangerous than asteroids overall, though. It was whether all money available for existential risk prevention/mitigation would be better spent on nano than on space-based dangers.
There doesn’t seem to be any good way to spend money so that all possible nano risks will be mitigated (other than lobbying to ban all nano research everywhere, and I’m far from convinced that the potential dangers of nano are greater than the benefits). I’m not even sure there is a good way to spend money on mitigation of any single nano risk.
The most obvious mitigation/prevention technology would be really good detectors for autonomous nanobots, whether self-reproducing or not. But until we know how they work and what energy source they use, we can’t do all that much useful research in that direction, and spending after we know what we need would probably be much more efficient. This also looks like an issue where the military will spend such enormous amounts once the possibilities are clear that money spent beforehand will not affect the result all that much.
Yes, I did; that’s one of the most obvious ones. It’s not going to be possible to prevent a nation with access to uranium from building nuclear weapons, but I think that would be the case anyway, with or without nano. The risk of private persons building them might be somewhat increased. I’m not sure if there is any need to separate isotopes in whatever machines pre-process materials in/for nano-assemblers, or if they lend themselves to being modified for that. Assuming they do, you’d need to look at anyone who processes large amounts of sea water, or any other material that contains uranium. Perhaps you could mandate that only designs that are vulnerable to radioactivity can be sold commercially, or make the machines refuse to work with uranium in a way that is hard to remove. I don’t see how spending money now could help in any way.
I’m not sure the probability of a serious error in the best available argument against something can be considered a lower bound on the probability you should assign it in general. In the case of the LHC, if there is a 1 in 20 chance of a mistake that doesn’t really change the conclusion much, a 1 in 100 chance of a mistake such that the real probability is 1 in 100,000, and a 1 in 10,000 chance of a mistake such that the real probability is 1 in 1,000, then 1 in a million could still be roughly the correct estimate.
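Those numbers can be combined explicitly as a mixture over "which argument is actually correct" (the specific error probabilities here are the hypothetical ones from this comment, not real LHC figures):

```python
# Hypothetical mistake scenarios from the comment:
# (probability this scenario holds, disaster probability if it does)
scenarios = [
    (0.05,   1e-6),  # mistake that doesn't really change the conclusion
    (0.01,   1e-5),  # mistake such that the real probability is 1 in 100,000
    (0.0001, 1e-3),  # mistake such that the real probability is 1 in 1,000
]
p_no_mistake = 1 - sum(p for p, _ in scenarios)

# Total disaster probability: weight each branch by its probability.
total = p_no_mistake * 1e-6 + sum(p * q for p, q in scenarios)
print(total)  # ~1.2e-6: still roughly "1 in a million"
```

So the point stands: a meaningful chance of error only blows up the estimate if the error modes are weighted toward drastically higher risk.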
The 1⁄5000 number only works for the really large asteroids (> 1 km in diameter). Note that, as I pointed out earlier, much smaller asteroids can be locally devastating. The resources that go to finding the very large asteroids also help track the others, reducing the chance of human life lost even outside existential risk scenarios. And as I pointed out, there are a lot of other potential space-based existential risks. That said, I think you’ve made a very good point above about the many non-gray-goo scenarios that make nanotech a severe potential existential risk. So I’ll agree that if one is comparing the probability of a nanotech existential risk scenario to the probability of a meteorite existential risk scenario, the nanotech one is more likely.
Your point about the impact of nanotech on nuclear proliferation I find particularly disturbing. The potential for nanotech to greatly increase the efficiency of enriching uranium seems deeply worrisome and that’s really the main practical limitation in building fission weapons.
Upvoted for updating. I agree that smaller asteroids are an important consideration for space; we expect about one Tunguska event per century I believe, which stands a ~5% chance of hitting a populated area as far as I know. Saving a 5% chance of the next Tunguska hitting a populated area is a good thing.
A lot of it seems to hinge on the probability you assign to those threats being developed in the next century.
Accidental grey goo doesn’t seem plausible, and purposeful destructive use of nanotech doesn’t necessarily fall in that category. We can have nanomachines that act as bioweapons, infecting people and killing them.
Are you disagreeing with something I said? I’m not sure nanotech would be better at killing that way than a designer virus, which should be a lot easier and cheaper (possibly even when accounting for the need to find a way to prevent it from spreading to your own side, if that’s necessary). Nanotech might be able to do things that a virus can’t, but that would be the sort of thing I mentioned. Anyway I don’t see how we could effectively spend money now to prevent either.
I agree with this. I disagree that there are no clear non-goo extinction risks associated with nano, and gave an example of one.
It’s relatively insignificant, compared to other sources of existential risk. Overall, it’s a vastly better investment than lipstick.
It’s not generally valid, since this diverts resources from development of other potentially relevant tech that could help with establishing a colony once the time is right.
Speed of light fail
No. We know that there are changes in a star before the supernova occurs. For example, in a Type II supernova, the radiation level initially increases linearly. For other supernova types the luminosity of the star does sometimes increase before the supernova event itself. Also, hours before a supernova, there may be a drastic increase in neutrino production.
It is also likely that more detailed observation of stars will give us a better idea what sort of more subtle signs show up prior to supernovae.
Think about it. You observe changes in a star 8 light-hours away from earth, and radio your observations back. What speed do the radio waves travel at? c. What speed does the light bearing the original observation travel at? c. What speed does the supernova blast travel at? also c. Neutrinos travel so close to c it makes no difference.
From the parent post:
The notification and the blast travel at c, but the blast is hours behind the notification.
If observed changes to a star happen well before the supernova event itself, then the fact that everything travels at c doesn’t matter. Say, for example, that the neutrino flux increase happens 24 hours beforehand. That means we have 24 hours of warning before the supernova event. Similarly, if we see an increase in luminosity before the supernova, we still get advance warning. What matters is that there is a delay between when stars show signs of supernovaing and when they actually supernova.
The point is that being closer to the star when that happens doesn’t provide you with more forewarning than if you look at it from home.
I don’t think anyone is advocating that we send actual probes to Betelgeuse or IK Pegasi. I’m confused why one would think that would even be on the table. Even if we sent a probe today at a tenth of the speed of light (well beyond our current capabilities) it will still take around 1500 years to get to IK Pegasi. I don’t know why one would even think that would be at all in the useful category.
What is helpful is having more space-based observation equipment in our solar system. The more we put into space, the less of a problem we have with atmospheric interference, artificial radio sources, and general light pollution. To use one specific example that would help a lot: if we had a series of optical telescopes spread out around the solar system, we could use parallax measurements to get a better idea of how far away Betelgeuse is. For a variety of reasons there’s a lot of uncertainty about its distance, with 330 light years as a lower estimate and around 700 as an upper estimate, although around 640 seems to be where things are settling. Given the inverse-square law for radiation, this matters for a supernova concern. A difference of 300 light years corresponds to about a factor of 4 in the radiation strength. Overall, most of the interesting, practical investigation and reduction of astronomical existential risks can be done right here in our home system.
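The factor-of-4 claim is easy to verify: received radiation flux falls off as 1/d², so comparing the low distance estimate with a star 300 light years farther out:

```python
d_near = 330.0          # light years, lower estimate for Betelgeuse's distance
d_far = d_near + 300.0  # 300 light years farther out

# Inverse-square law: the flux ratio between the two distances.
flux_ratio = (d_far / d_near) ** 2
print(flux_ratio)  # ~3.6, i.e. roughly a factor of 4 in radiation strength
```

Which is exactly why pinning down the distance matters for assessing the supernova risk.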
So the benefit of space-based observation is signal amplification rather than signal speed.
In a nutshell yes. And the more signal amplification we get the quicker we can detect problems before it is too late.