I’ve noticed that for many people (myself included), subjective P(doom) stays surprisingly constant over time. And I’ve wondered if there’s something like “conservation of subjective P(doom)”: if you become more optimistic about one part of AI going better, you tend to become more pessimistic about some other part, such that your overall P(doom) stays constant. I’m maybe 50% confident that I do something like this myself.
(ETA: Of course, there are good reasons subjective P(doom) might remain constant, e.g. if most of your uncertainty is about the difficulty of the underlying alignment problem and you don’t think we’ve been learning much about that.)