[Question] Conditional on living in an AI safety/alignment-by-default universe, what are the implications of this assumption being true?

I’m asking this question because I suspect that, if true, it would overturn some important assumptions commonly made on LW, as well as change what’s most valuable to work on.

Conditional on AI alignment/safety being the default case, meaning that alignment turns out not to require much effort (or is at least amenable to standard ML/AI techniques) and AI misalignment becomes effectively synonymous with AI misuse, what are the most important implications for the LessWrong community?

In particular, I’m thinking of scenarios where AI is easy to align or safe by default, such that the central problem shifts from rogue AI to humans misusing AI.