The arguments presented are changing the goalposts. Eventual superhuman AI is certainly an x-risk, but not obviously an urgent one. (E.g., climate change is bad, and the sooner we address it the better, but it’s not “urgent.”)
I don’t think we’re changing goalposts with respect to Katja’s posts; hers didn’t directly discuss timelines either, and seemed to be more about “is AI x-risk a thing at all?”. And to be clear, our response isn’t meant to be a fully self-contained argument for doom or anything along those lines (see the “we’re not discussing” list at the top); that would indeed require discussing timelines, the difficulty of alignment given those timelines, etc.
On the object level, I do think there’s a lot of probability mass on timelines under 20 years for “AGI powerful enough to cause an existential catastrophe”, so it seems pretty urgent. FWIW, climate change also seems urgent to me (though not a big x-risk; maybe that’s what you mean?).