What are your very long timeline expectations for 2045+ or 2055+ AGI (automated AI R&D, that is)? That’s where I expect most of the rare futures with humanity not permanently disempowered to be, though the majority even of these long timelines will still result in permanent disempowerment (or extinction).
I think it takes at least about 10 years to qualitatively transform an active field of technical study or to change the social agenda, so 2-3 steps of such change might have a chance of sufficiently reshaping how the world thinks about AI x-risk, and what technical tools are available for shaping the minds of AGIs, to either make a human-initiated lasting Pause plausible or to provide the means of aligning AGIs in an ambitious sense.