[Question] Is AI safety doomed in the long term?

Are there any measures that humanity can put in place to control a vastly (and increasingly) more intelligent race?

Given that humans determine the fate of other species on the planet, I cannot find any reason to believe that a lesser intelligence can control a greater intelligence.
This leads me to think that AI safety is, at most, about controlling the development of AI until it can make, and implement, its own decisions about the fate of humanity.

Is this a common stance that I am naively catching up to?
If not, what are the counterarguments?
