I don’t think anyone really assumes that.
Maybe not explicitly, but I keep seeing people refer to “the great filter” as if it was a single thing. But maybe you’re right and I’m reading too much into this.
Normal filters have multiple layers too—for example first you can have the screen that keeps the large particles out. Then you have the activated charcoal, and then the paper to keep the charcoal out, and for high-end filters you finish off with a porous membrane.
And yet, it’s all one filter.
So… we’re speculating here about where the bulk of the work is being done. The strength could be evenly distributed from beginning to end, but it could also be lumpy.
From the OP:
“The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it’s likely that there is one location in civilizational development where most of the filter lies (i.e., where the probability of getting to the next stage is the lowest).”
That’s very non-obvious to me; I can’t see why there couldn’t be (say) a step with probability around 1e-12, one with probability around 1e-7, one with probability around 1e-5, one with probability around 1e-2, and a dozen ones with joint probability around 1e-1, so that no one step comprises the majority (logarithmically) of the filter.
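For concreteness, the logarithmic shares in that hypothetical work out as follows (a quick sketch; the step probabilities are just the ones named above, with the dozen mild steps lumped into a single entry):

```python
import math

# Hypothetical filter steps from the comment above; the dozen mild
# steps with joint probability ~1e-1 are lumped into one entry.
steps = [1e-12, 1e-7, 1e-5, 1e-2, 1e-1]

total = math.prod(steps)  # overall pass probability: 1e-27
for p in steps:
    share = math.log(p) / math.log(total)
    print(f"step {p:.0e}: {share:.0%} of the filter (log scale)")
# The hardest step (1e-12) accounts for ~44% of the filter:
# the biggest single chunk, but not a majority.
```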
Good point, but note that in your example, the top filter does a lot more than the runner-up. There could be a lot of value in knowing what that top one is, even if the others combined matter more. (But that wouldn’t answer the question of how much danger we still face. Which may be the biggest question. So again—good point.)
It is. But even there, I’m under the impression that many people focus on the answers “most of it” and “hardly any”, neglecting everything in between.
A few possible hypotheses (where by “supercomputers” I mean ‘computation capabilities comparable to those available to people on Earth in 2014’):
1. Nearly all the filter ahead. A sizeable fraction of all star systems have civilizations with supercomputers, and about one in 1e24 of them will take over their future light cone.
2. Most of the filter ahead. There are about a million civilizations with supercomputers per galaxy on average, and about one in 1e18 of them will take over their future light cone.
3. Halfway through the filter. There is about one civilization with supercomputers per galaxy on average, and about one in 1e12 of them will take over their future light cone.
4. Most of the filter behind. There is about one civilization with supercomputers per million galaxies on average, and about one in a million of them will take over their future light cone.
5. Nearly all of the filter behind. There have been few or no civilizations with supercomputers other than us in our past light cone, and we have a sizeable chance of taking over our future light cone.
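One way to see how these hypotheses relate: each one splits a roughly fixed total filter between “behind” and “ahead”. A rough sketch (the star-system and galaxy counts are my own round-number assumptions, not from the thread; everything is order-of-magnitude only):

```python
STAR_SYSTEMS = 1e24   # rough count in the relevant volume (assumption)
GALAXIES = 1e12       # rough count (assumption)
TOTAL_FILTER = 1e-24  # chosen so that ~1 civilization ends up taking over

# Probability that a supercomputer-level civilization takes over its
# future light cone, under each numbered hypothesis.
p_ahead = {1: 1e-24, 2: 1e-18, 3: 1e-12, 4: 1e-6, 5: 1.0}

for n, pa in p_ahead.items():
    n_civs = STAR_SYSTEMS * TOTAL_FILTER / pa  # civs reaching supercomputers
    print(f"hypothesis {n}: ~{n_civs:.0e} such civilizations "
          f"(~{n_civs / GALAXIES:.0e} per galaxy)")
```

Under these assumptions the counts reproduce the list above: ~1e6 per galaxy for hypothesis 2, ~1 per galaxy for 3, ~1 per million galaxies for 4, and roughly just us for 5.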
ISTM people push too much of their probability mass near the ends and leave too little in the middle for some reason. (In particular, I’m under the impression that certain singularitarians unduly privilege hypothesis 5 just because they can’t imagine a reason why we will fail to take over the future light cone—which is way too Inside View for me—and they think the only thing left to know is whether the takeover will be Good or Bad.)
I think there’s very little practical difference between 3 and 4 (unless you value us taking over the future light cone at somewhere between 1e6 and 1e12, on some appropriate scale), and not terribly much between 1 and 4 either, so maybe people are conflating everything below 5 together, and that’s what they actually mean when they sound like they’re saying 1?
(Of course there’s a hypothesis 6, the planetarium hypothesis: someone in our past light cone has already taken over their future light cone, but they don’t want us to know for some reason or another. But I don’t think that more than 99.99% of possible superintelligences would give a crap about us inferior beings, so this can explain at most about one sixth of the filter. Just add “visibly” before “take over” everywhere above.)
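That “one sixth” is a logarithmic share: if at most 99.99% of superintelligences bother to hide, hiding accounts for at most a factor of 1e4 of apparent filter, against a total of about 24 orders of magnitude (taking hypothesis 1’s 1e24 as the total, which is my reading, not stated explicitly in the thread):

```python
import math

MAX_HIDING = 1e4     # 99.99% hiding leaves 1 in 1e4 visible
TOTAL_FILTER = 1e24  # total filter strength, per hypothesis 1

share = math.log(MAX_HIDING) / math.log(TOTAL_FILTER)
print(f"planetarium hypothesis explains at most {share:.0%} of the filter")
# 4 of 24 orders of magnitude: about one sixth.
```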
BTW, I think that 1 is all but ruled out (and 2 is slightly disfavoured) by the failure of SETI (at least if you interpret the present tense to mean ‘as of the time their worldline intersects the surface of our past light cone’), and 5 is unlikely because of Moloch (if he’s scary enough to stop cancer from killing whales, he’s quite likely to be scary enough to stop >99.9% of civilizations with supercomputers from taking over the light cone).
My own probability distribution has a broad peak somewhere around 4 with a long-ish tail to the left.
From the article:
“The real filter could be a combination of an early one and a late one, of course. But, unless the factors are exquisitely well-balanced, it’s likely that there is one location in civilizational development where most of the filter lies (i.e., where the probability of getting to the next stage is the lowest).”
That doesn’t sound like it admits the possibility of twelve, independent, roughly equally balanced filters.
You’re being uncharitable. “[It’s] likely [that X]” doesn’t exclude the possibility of non-X.
If you know nothing about a probability distribution, it is more likely that it has one absolute maximum than more than one.
Maybe I am being uncharitable, but when Sophronius asks “[c]an somebody explain to me why people generally assume that the great filter has a single cause?” and you reply “I don’t think anyone really assumes that”, I have to admit that I’ve always seen people think of the Great Filter in terms of one main cause (e.g., look at the poll in this thread, where people choose one particular cause), and not in terms of multiple causes.
Though, you’re right that no one has said that multiple causes are outright impossible. And you may be right that one main cause makes a lot more sense. But I do think Sophronius raises a question worth considering, at least a bit.
At least in other Less Wrong posts and comments on the topic, the question is usually presented probabilistically, as in “Does the bulk of the great filter lie ahead of us or behind us?”
Though, it is usually not specified whether the writer is asking where the greatest number of candidates gets ruled out, or the greatest fraction. Maybe there are ten factors that each eliminate 90% of candidates prior to the technological-civilization level (leaving maybe a hundred near-Type-I civilizations per galaxy), but one factor (AI? Self-destruction by war or ecological collapse? Failing to colonize other worlds before a stray cosmic whatever destroys your homeworld?) takes out 99% of what is left. In that case nearly all of the Great Filter would be behind us, and our odds still wouldn’t be good.
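The arithmetic in that example can be made explicit (a sketch, using the hypothetical 90% and 99% figures above):

```python
import math

p_behind = 0.10 ** 10  # ten early factors, each passing 10% of candidates
p_ahead = 0.01         # one late factor passing 1% of survivors
p_total = p_behind * p_ahead

share_behind = math.log(p_behind) / math.log(p_total)
print(f"filter behind us: {share_behind:.0%} (log scale); "
      f"odds of making it through the rest: {p_ahead:.0%}")
# ~83% of the filter is behind us, yet our remaining odds are only 1%.
```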