“it could” is short by LW standards? News to me (a LessWronger). I would have guessed that most of us put at least 8% of the outcome distribution before 10 years.
But note they are talking about ASI, not just AGI, and before 8 years, not 10 years. (Of course it is unclear what credence “could” corresponds to.)
Still. It is widely understood by those I consider experts that ASI will follow shortly after AGI. AGI will appear in the context of partial automation of AI R&D, and will itself enable full automation of AI R&D, leading to an intelligence explosion.