[Question] Is AI safety doomed in the long term?

Are there any measures that humanity can put in place to control a vastly (and increasingly) more intelligent race?

Given that humans determine the fate of other species on the planet, I cannot find any reason to believe that a lesser intelligence can control a greater intelligence.
This leads me to think that AI safety is at most about controlling the development of AI until it makes, and can implement, its own decisions about the fate of humanity.

Is this a common stance that I am naively catching up to?
Or what are the counterarguments?
