[Question] How does politics interact with AI?

I believe two things about rulers (politicians, CEOs of big orgs):

  1. They give others only as much freedom as is necessary for those others to be useful in achieving the ruler's goals

  2. They don't want actors more powerful than themselves anywhere nearby

From these I intuit that:

  1. Rulers will not support the development of powerful AGI, since it might overpower them

  2. Rulers might get rid of humans as soon as an AI can achieve their goals more efficiently (and that is a much lower bar for an AI's intelligence and power than what it would need to overpower the ruler)

Thus my immediate fears are not so much about aligning superhuman AGI as about aligning rulers with the needs of their constituents. Imagine, for example, a future in which we never get smarter-than-human AIs, but something only a bit more powerful than Office365 Copilot is sufficient for a CEO (or the real stakeholders behind them) to run a whole company, or for an autocratic president to run enough of the industry to supply her with a yacht and some caviar.

Question: is either of my two assumptions, either of my two intuitions, or the conclusion wrong?

What falsifiable, observable predictions do they make that I could verify on the internet today?