Is it time to talk about AI doomsday prepping yet?

Given the shortening timelines (https://www.lesswrong.com/posts/K4urTDkBbtNuLivJx/why-i-think-strong-general-ai-is-coming-soon, https://www.lesswrong.com/posts/CvfZrrEokjCu3XHXp/ai-practical-advice-for-the-worried), perhaps it’s time to think about what “plan B” should actually be if the “plan A” of solving the alignment problem does not succeed.

For many people, the answer is simply “then we all die”. But suppose for a moment that we don’t get off the hook that easily, and that an AGI doomsday unfolds in a way that leaves survivors.

What can one do right now to maximize one’s chances of being among those survivors if unaligned AGI is created? And what would an AI prepper look like, compared and contrasted with, say, a nuclear war prepper?