What is the duration of P(doom)?
What do people mean by that metric? Is it x-risk for this century? Forever? For the next 10 years? Until we figure out AGI, or after AGI, on the road to superintelligence?
To me the horizon matters fundamentally, because P(doom) forever must be much higher than P(doom) over the next 10-20 years. Or is it implied that surviving the next period means we have figured out alignment permanently, for every next generation of AIs? It’s confusing.
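The horizon point can be made precise (a sketch; the notation $D_t$ is mine, not from the original question):

```latex
% Let $D_t$ be the event ``doom occurs by time $t$''.
% Since $D_{t_1} \subseteq D_{t_2}$ whenever $t_1 \le t_2$,
% cumulative risk can only grow with the horizon:
P(D_{t_1}) \le P(D_{t_2}) \quad \text{for } t_1 \le t_2,
\qquad
P(\text{doom ever}) = \lim_{t \to \infty} P(D_t).
```

So a lifetime or all-time P(doom) is an upper bound on any fixed-window estimate, and the two are only comparable if the window is stated.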
Thank you. This is the kind of post I wanted to write when I published “the burden of knowing” a few days ago, but I wasn’t thinking rationally at that moment.