The place to situate the disagreement for mainstream skeptics of what Eliezer calls “rapid capability gain” might be something like: “Once we have AGI, is it more likely to take 2 subjective years to blow past human scientific reasoning in the way AlphaZero blew past human chess reasoning, or 10 subjective years?” I often phrase the MIRI position along the lines of “AGI destroys or saves the world within 5 years of being developed”.
That’s just talking in terms of widely held views in the field, though. I think that e.g. MIRI/Christiano disagreements are less about whether “months” versus “years” is the right timeframe, and more about things like: “Before we get AGI, will we have proto-AGI that’s nearly as good as AGI in all strategically relevant capabilities?” And the MIRI/Hanson disagreements are maybe less about months vs. years and more about whether AGI will be a discrete software product invented at a particular time and place at all.
I tend to agree with Robin that AGI won’t be a discrete product, though I hold that view with much less confidence.