Precise P(doom) isn’t very important for prioritization or strategy

People spend time trying to estimate the probability that AI will cause an existential catastrophe, sometimes referred to as P(doom).

One point that I think gets missed in this discussion is that a precise estimate of P(doom) isn’t that important for prioritization or strategy.

I think it’s plausible that P(doom) is greater than 1%. For prioritization, even a 1% chance of existential catastrophe from AI this century would be enough to make AI the most important existential risk: the probability of existential catastrophe from nuclear war, pandemics, and other sources seems lower than 1%. Identifying exactly where P(doom) lies in the 1%-99% range doesn’t change priorities much.
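
To make the prioritization point concrete, here is a minimal sketch with made-up, purely illustrative probabilities (not estimates from this post): as long as P(doom) from AI exceeds the other risks, its exact value within the 1%-99% range doesn’t change the ranking.

```python
# Minimal sketch with made-up, purely illustrative probabilities.
# The point: if P(doom) from AI exceeds the other risks, its exact
# value within 1%-99% doesn't change which risk ranks first.

other_risks = {
    "nuclear war": 0.005,  # assumed < 1% for illustration
    "pandemics": 0.005,    # assumed < 1% for illustration
}

for p_doom_ai in (0.01, 0.10, 0.50, 0.99):
    risks = {**other_risks, "AI": p_doom_ai}
    ranking = sorted(risks, key=risks.get, reverse=True)
    print(f"P(doom) = {p_doom_ai:.2f} -> priority order: {ranking}")
```

In every case AI comes out on top; only the margin changes, which is the sense in which a precise P(doom) adds little for prioritization.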

As with AI timelines, it’s unclear that a different P(doom) would change our strategy towards alignment. A different P(doom) shouldn’t dramatically change which projects we focus on, since we probably need to try as many things as possible, and quickly. I don’t think the list of projects or the resources we dedicate to them would change much in the 1% or 99% worlds. Are there any projects you would robustly exclude from consideration if P(doom) were 1-10% but include if it were 90-99% (or vice versa)?

I think communicating P(doom) can be useful for other reasons, like assessing progress or getting a sense of someone’s priors, but pinning down a precise value doesn’t seem that important overall.