I no longer believe that anthropic probabilities make sense (see http://lesswrong.com/lw/891/anthropic_decision_theory_i_sleeping_beauty_and/ and the subsequent posts; search for "anthropic decision theory" on LessWrong); only anthropic decisions do. Applying this to these situations, total utilitarians should (roughly) act as if there were a late filter, while average utilitarians and selfish beings should act as if there were an early filter.
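To make that last claim concrete, here is a minimal toy calculation in the spirit of anthropic decision theory. The two-hypothesis setup and all of the numbers are my own illustration, not from the linked posts: two equally likely filter hypotheses that differ only in how many civilizations reach our stage, and one linked decision about whether to pay now to prepare for a late filter.

```python
# Toy model (illustrative numbers, mine): two equally likely hypotheses
# about where the Great Filter sits, differing only in how many
# civilizations reach our stage. Every such civilization faces the same
# linked decision: pay a cost now to prepare for a *late* filter.

prior = 0.5                  # equal credence in early vs late filter
n_early, n_late = 1, 1000    # civilizations at our stage under each hypothesis
cost, benefit = 10.0, 15.0   # per-civilization cost of preparing; payoff if late

u_if_early = -cost           # preparation is wasted if the filter was early
u_if_late = benefit - cost   # preparation pays off if the filter is late

# Total utilitarian: sum utility over every affected civilization, so the
# populous late-filter branch dominates the expectation (SIA-like behaviour).
eu_total = prior * n_early * u_if_early + prior * n_late * u_if_late

# Average utilitarian: per-capita utility, so the population sizes cancel
# and the raw 50/50 prior does all the work (SSA-like behaviour).
eu_average = prior * u_if_early + prior * u_if_late

for name, eu in [("total", eu_total), ("average", eu_average)]:
    verdict = "prepare" if eu > 0 else "do not prepare"
    print(f"{name} utilitarian: EU(prepare) = {eu:+.1f} -> {verdict}")
```

With these numbers the total utilitarian prepares, because the thousand-civilization late-filter branch dominates the sum, which mimics an SIA-style update towards a late filter; the average utilitarian declines, because per-capita utilities make the population counts cancel, leaving only the 50/50 prior. The original comment groups selfish agents with average utilitarians, since they likewise don't care how many other civilizations share their situation.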
I’m extremely curious: how did you come to conclude that the Great Filter was probably a particular evolutionary leap?
Using bad reasoning: intuition and subjective judgement. The chances of a late Great Filter just don’t seem high enough...
What do you make of Katja Grace’s SIA-based argument for a late Filter?
Same answer as above: I no longer believe that anthropic probabilities (SIA included) make sense, only anthropic decisions do. Translated into decisions, the argument comes out as before: total utilitarians should act as if the filter were late, while average utilitarians and selfish beings should act as if it were early.