Due to the absence of any signs of intelligence out there, especially paper-clippers burning the cosmic commons, we might conclude that unfriendly AI could not be the most dangerous existential risk that we should worry about.
I view this as one of the single best arguments against risks from paperclippers. I’m a little concerned that it hasn’t been dealt with properly by SIAI folks—aside from a few comments by Carl Shulman on Katja’s blog.
I suspect the answer may be something to do with anthropics—but I’m not really certain of exactly what it is.
I view this as one of the single best arguments against risks from paperclippers. I’m a little concerned that it hasn’t been dealt with properly by SIAI folks—aside from a few comments by Carl Shulman on Katja’s blog.
The Fermi Paradox was considered a paradox even before anybody started talking about paperclippers. And even if we knew for certain that superintelligence was impossible, the Fermi Paradox would still remain a mystery—it’s not paperclippers (one possible form of colonizer) in particular that are hard to reconcile with the Fermi Paradox, it’s the idea of colonizers in general.
Simply the fact that the paradox exists says little about the likelihood of paperclippers, though it does somewhat suggest that we might run into some even worse x-risk before the paperclippers show up. (What value you attach to that “somewhat” depends on whether you think it’s reasonable to presume that we’ve already passed the Great Filter.)
One important thing to keep in mind is that although Katja emphasizes this argument in the context of anthropics, the argument goes through even if one hasn’t ever heard of anthropic arguments at all, simply in terms of the Great Filter.
the argument goes through even if one hasn’t ever heard of anthropic arguments
Only very weakly. Various potential early filters are quite unconstrained by evidence, so our uncertainty spans many orders of magnitude. Abiogenesis, the evolution of complex life and intelligence, the creation of suitable solar systems for life, etc., could easily cost many orders of magnitude. Late-filter doomsdays like extinction from nukes or bioweapons would have to be exceedingly convergent (across diverse civilizations) to make a difference of many orders of magnitude for the Filter (essentially certain doom).
Unless you were already confident that the evolution of intelligent life is common, or considered convergent doom (99.9999% of civilizations nuke themselves into extinction, regardless of variation in history or geography or biology) pretty likely, the non-anthropic Fermi update seems pretty small.
I’m not sure which part of the argument you are referring to. Are you talking about estimates that most of the Great Filter is in front of us? If so, I’d be inclined to tentatively agree. (Although I’ve been updating more in the direction of more filtration in front for a variety of reasons.) I was talking about the observation that we shouldn’t expect AI to be a substantial fraction of the Great Filter. Katja’s observation in that context is simply a comment about what our light cone looks like.
Are you talking about estimates that most of the Great Filter is in front of us? If so, I’d be inclined to tentatively agree.
OK.
I was talking about the observation that we shouldn’t expect AI to be a substantial fraction of the Great Filter.
Sure. I was saying that this alone (sans SIA) is much less powerful if we assign much weight to early filters. Say (assuming we’re not in a simulation) you assigned 20% probability to intelligence being common and visible (this does inevitably invoke observation selection problems, since colonization could preempt human evolution), 5% to intelligence being common but invisible (environmentalist Von Neumann probes enforce low visibility; or maybe the interstellar medium shreds even slow starships), 5% to intelligence arising often and self-destructing, and 70% to intelligence being rare. Then you look outside, rule out “common and visible,” and update to 6.25% probability of invisible aliens, 6.25% probability of convergent self-destruction in a fertile universe, and 87.5% probability that intelligence is rare. With the SIA (assuming we’re not in a simulation, even though the SIA would make us confident that we were), we would also chop off the “intelligence is rare” possibility, and wind up with 50% probability of invisible aliens and 50% probability of convergent self-destruction.
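For anyone who wants to check the arithmetic, here is a minimal Python sketch (the hypothesis names and the renormalization helper are my own illustrative assumptions, not anything from the comment above) that reproduces those numbers:

```python
# Illustrative sketch: reproduce the renormalization in the comment above.

priors = {
    "common_and_visible": 0.20,
    "common_but_invisible": 0.05,
    "common_and_self_destructing": 0.05,
    "intelligence_is_rare": 0.70,
}

def renormalize_after_ruling_out(probabilities, ruled_out):
    """Drop the ruled-out hypotheses and renormalize what remains."""
    kept = {h: p for h, p in probabilities.items() if h not in ruled_out}
    total = sum(kept.values())
    return {h: p / total for h, p in kept.items()}

# Non-anthropic update: we look outside and see no visible colonizers.
fermi_update = renormalize_after_ruling_out(priors, {"common_and_visible"})
print(fermi_update)
# roughly 6.25% invisible aliens, 6.25% convergent self-destruction, 87.5% rare

# SIA-style move described above: also discard "intelligence is rare".
sia_update = renormalize_after_ruling_out(
    priors, {"common_and_visible", "intelligence_is_rare"}
)
print(sia_update)
# roughly 50% invisible aliens, 50% convergent self-destruction
```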
And, as Katja agrees, SIA would make us very confident that AI or similar technologies will allow the production of vast numbers of simulations with our experiences, i.e. if we bought SIA we should think that we were simulations and that AI was feasible in the “outside world,” but we should not draw strong conclusions (within many orders of magnitude) about late or early filters in that outside world.
I agree with most of this. The relevant point is about AI in particular. More specifically, if an AGI is likely to start expanding to control its light cone at a substantial fraction of the speed of light, and this is a major part of the Filter, then we’d expect to see it. In contrast, something like nanotech that destroys a civilization on its home planet would be hard for distant observers to notice. Anthropic approaches (both SIA and SSA) argue for large amounts of filtration in front of us. If that’s correct, the point is that observation suggests AGI isn’t a major part of that filtration.
An example might illustrate the point better. Imagine that someone is worried that the filtration of civilizations generally occurs because they run some sort of physics experiment that causes a false vacuum collapse that expands at less than the speed of light (say c/10,000). We can discount the likelihood of such an event because basic astronomy would show us the results: civilizations that had wiped themselves out this way would visibly affect the stars near them.
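A quick back-of-envelope calculation suggests why even such a slow-moving disaster would be hard to miss (the billion-year elapsed time below is my own illustrative assumption, not a figure from the comment above):

```python
# Back-of-envelope sketch; the elapsed time is an illustrative assumption.

EXPANSION_SLOWDOWN = 10_000  # the front expands at c divided by this factor

def radius_in_light_years(elapsed_years: float) -> float:
    """Radius (in light years) reached by a front expanding at c/10,000."""
    return elapsed_years / EXPANSION_SLOWDOWN

# If even one such accident happened a billion years ago, the affected region
# would now be about 100,000 light years in radius (comparable to the diameter
# of the Milky Way), so its effect on nearby stars should be hard to miss.
print(radius_in_light_years(1e9))  # 100000.0
```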
Katja’s blog post on the topic is here.
The claim that the argument there is significant depends strongly on this—where I made some critical comments.