I agree. Note that the very concept of "bad news" doesn't make sense apart from a hypothetical where you get to choose to do something about determining this news one way or the other. Thus CronoDAS's comment actually exemplifies another reason for the error: if the hypothetical decision can only vary the extent of a late Great Filter, as opposed to shifting the timing of a filter, then it's clear that discovering a powerful Great Filter is "bad news" by such a metric (because it's powerful, not because it's late).
Note that the very concept of "bad news" doesn't make sense apart from a hypothetical where you get to choose to do something about determining this news one way or the other.
I don't think that is the concept of "bad news" that Hanson and Bostrom are using. If you have background knowledge X, then a piece of information N is "bad news" if your expected utility conditioned on N & X is less than your expected utility conditioned on X alone.
Let our background knowledge X include the fact that we have secured all the utility that we received up till now. Suppose also that, when we condition only on X, the Great Filter is significantly less than certain to be in our future. Let N be the news that a Great Filter lies ahead of us. If we were to learn N, then, as Wei Dai pointed out, we would be obliged to devote more resources to mitigating the Great Filter. Therefore, our expected utility over our entire history would be less than it is when we condition only on X. That is why N is bad news.
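The definition above can be illustrated with a toy calculation. All the numbers here are assumed for illustration (the probability that a filter lies ahead given X, and the two utility levels are hypothetical), but the structure matches the argument: N is bad news exactly when expected utility conditioned on N & X falls below expected utility conditioned on X alone.

```python
# Toy expected-utility calculation for the "bad news" definition.
# All numbers are illustrative assumptions, not from the discussion.

# Conditioning on background knowledge X alone, suppose a Great
# Filter lies ahead with probability 0.3; the news N makes it certain.
p_filter_given_x = 0.3

u_no_filter = 100.0  # expected future utility if no filter lies ahead
u_filter = 40.0      # expected future utility if a filter lies ahead
                     # (resources diverted to mitigation, risk of failure)

# EU(X) = P(filter | X) * U(filter) + P(no filter | X) * U(no filter)
eu_given_x = (1 - p_filter_given_x) * u_no_filter + p_filter_given_x * u_filter

# EU(N & X): N makes the filter certain, so only the filter branch remains.
eu_given_n_and_x = u_filter

print(eu_given_x)        # 82.0
print(eu_given_n_and_x)  # 40.0
# N is "bad news" because EU(N & X) = 40.0 < EU(X) = 82.0.
```

Note that the drop in expected utility comes entirely from the conditioning, not from any change in the world: learning N shifts probability mass onto the lower-utility branch, which is exactly the sense in which Hanson and Bostrom call it bad news.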