[Question] In search for plausible scenarios of AI takeover, or the Takeover Argument

I’m searching for writeups on scenarios of AI takeover. Specifically, I’m interested in what fills in the “?????” in:
“1. AGI is created
2. ?????
3. AGI takes over the world”
Yeah, I played the paperclips game. But I’m searching for a serious writeup of the possible steps a rogue AI could take to take over the world, and why (or whether) it inevitably will.
I’ve come to understand that it’s kinda taken for granted that an unaligned foom equals a catastrophic event.
I want to understand whether it’s just me missing something, or whether the community as a whole is too focused on alignment “before”, when it might also be productive to model the “aftermath”: making the world more robust to AI takeover, instead of taking for granted that we’re done for if a takeover attempt happens.
And maybe the world is already robust to such takeovers. After all, there are already superintelligent systems out in the world, and none of them have taken it over just yet (I’m talking about governments and companies).
To sum it up: I guess I’m searching for something akin to Bostrom’s Simulation Argument; let’s call it “the Takeover Argument”.