I agree it’s likely the Great Filter is behind us. And I think you’re technically right, most filters are behind us, and many are far in the past, so the “average expected date of the Great Filter” shifts backward. But, quoting my other comment:
Every other possible filter would gain equally, unless you think this implies that maybe we should discount other evolutionary steps more as well. But either way, that’s still bad on net because we lose probability mass on steps behind us.
So even though the “expected date” shifts backward, the odds for “behind us or ahead of us” shift toward “ahead of us”.
Let me put it this way: let’s say we have 10 possible filters behind us, and 2 ahead of us. We’ve “lost” one filter behind us due to new information. So, 9 filters behind us gain a little probability mass, 1 filter behind us loses most probability mass, and 2 ahead of us gain a little probability mass. This does increase the odds that the filter is far behind us, since “animal with tool-use intelligence” is a relatively recent filter. But, because “animal with tool-use intelligence” was already behind us and a small amount of that “behind us” probability mass has now shifted to filters ahead of us, the ratio between all past filters and all future filters has adjusted slightly toward future filters.
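The redistribution described above can be sketched as a toy Bayesian update. This is just an illustration: the equal priors, the 10/2 split, and the 10x discount factor on the “tool-use intelligence” filter are all arbitrary assumptions, not claims about real probabilities.

```python
# Toy model: 10 candidate filters behind us, 2 ahead, all equally
# likely a priori. New evidence makes one past filter ("tool-use
# intelligence") 10x less likely -- the factor is an assumption
# chosen purely for illustration.
priors = [1 / 12] * 12         # indices 0-9: past filters, 10-11: future
likelihood = [1.0] * 12
likelihood[9] = 0.1            # the discounted past filter

posterior = [p * l for p, l in zip(priors, likelihood)]
total = sum(posterior)
posterior = [p / total for p in posterior]

p_past_before = sum(priors[:10])       # 10/12 ≈ 0.833
p_past_after = sum(posterior[:10])     # ≈ 0.820
p_future_before = sum(priors[10:])     # 2/12 ≈ 0.167
p_future_after = sum(posterior[10:])   # ≈ 0.180

print(f"P(filter behind us):  {p_past_before:.3f} -> {p_past_after:.3f}")
print(f"P(filter ahead of us): {p_future_before:.3f} -> {p_future_after:.3f}")
```

The nine remaining past filters each gain a little mass, as do the two future filters, but the total mass behind us still drops, so the past/future ratio shifts slightly toward the future, exactly as argued above.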
I see your point, but if 95% of the probability is in the past, the 5% in the future only gains a marginal amount. I do still like the idea of crossing off potential filters to see where the risks lie, so fair enough!