I’m not sure which part of my post you’re responding to with that comment, but perhaps there is a misunderstanding. The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter. They are not intended to correspond to any decisions that we actually have to make. The decision mentioned in the last paragraph, about how many resources to spend on existential risk reduction, which we do have to make, is not directly related to those two scenarios.
“The two scenarios involving Omega are only meant to establish that a late great filter should not be considered worse news than an early great filter.”
I honestly think this would have been way, way, way clearer if you had dropped the Omega decision theory stuff, and just pointed out that, given great filters of equal probability, choosing an early great filter over a late great filter would entail wiping out the history of humanity in addition to the galactic civilization that we could build, which most of us would definitely see as worse.
Point taken, but I forgot to mention that the Omega scenarios are also meant to explain why we might feel that the great filter being late is worse news than the great filter being early: an actual human, faced with the decision in scenario 2, might be tempted to choose the early filter.
I’ll try to revise the post to make all this clearer. Thanks.
But, in universes with early filters, I don’t exist. Therefore anything I do to favor late filters over early filters is irrelevant, because I can’t affect universes in which I don’t exist.
(And by “I”, I mean anything that UDT would consider “me”.)