[Question] Convince me that humanity *isn’t* doomed by AGI

Earlier this week, I asked the LessWrong community to convince me that humanity is as doomed by AGI as Yudkowsky et al. seem to believe. The responses were excellent (and even included a comment from Yudkowsky himself, who promptly ripped one of my points to shreds).

Well, you definitely succeeded in freaking me out.

Now I’d like to ask the community the opposite question: what are your best arguments for why we shouldn’t be worried about a nearly inevitable AGI apocalypse? To start things off, I’ll link to this excellent comment from Quintin Pope, which, as far as I’m aware, has not yet received any feedback.

What makes you more optimistic about alignment?