I just thought of a flaw in my analysis: if it’s intractable to make AI alignment more or less likely (and intractable to make the development of transformative AI more or less safe), then accelerating AI timelines actually seems good, because the benefits to people post-AGI if it goes well (a utopian civilization that lasts longer) seem to outweigh the harms to people pre-AGI if it goes badly (everyone on Earth dies sooner). Will think about this more.