I think the best justification is by analogy. Humans do not physically have a decisive strategic advantage over other large animals—chimps, lions, elephants, etc. And for hundreds of thousands of years, we were not at the top of the food chain, despite our intelligence. However, intelligence eventually won out, and allowed us to conquer the planet.
Moreover, the benefit of intelligence grew in proportion to the exponential advance of technology. There was a long, slow burn, followed by what (on evolutionary timescales) was an extremely “fast takeoff”: a very rapid improvement in technology (and thus power) over only a few hundred years. Technological progress is now so rapid that human minds have trouble keeping up within a single lifetime, and genetic evolution has been left in the dust.
That’s the world into which AGI will enter: a technological world in which a difference in intellectual ability can be readily translated into a difference in technological ability, and thus power. We must assume that an AGI will master any future technology the laws of physics don’t explicitly prohibit, and will do so faster than we can.
This is an excellent question. I’d say the main reason is that all of the AI/ML systems we have built to date are utility maximizers; that’s the mathematical framework in which they have been designed. Neural networks and deep learning work by using a simple optimizer, gradient descent, to find the minimum of a loss function. Evolutionary algorithms, simulated annealing, etc. find the minimum (or maximum) of a “fitness function”. We don’t know of any other way to build systems that learn.
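To make the “optimizer minimizing a loss function” framing concrete, here is a minimal sketch of gradient descent on a toy problem. The loss function, data point, and learning rate are all invented for illustration; real deep-learning systems do the same thing over millions of parameters using automatic differentiation.

```python
# Minimal sketch of gradient descent: repeatedly step a parameter w
# downhill along the gradient of a loss function until the loss is minimized.
# Toy setup (hypothetical): fit w so that w * x matches a target y,
# minimizing the squared-error loss L(w) = (w * x - y)^2.

def loss(w, x, y):
    return (w * x - y) ** 2

def grad(w, x, y):
    # Analytic derivative: dL/dw = 2 * x * (w * x - y)
    return 2 * x * (w * x - y)

x, y = 2.0, 6.0   # one data point; the loss-minimizing w is 3.0
w = 0.0           # initial guess
lr = 0.05         # learning rate (step size)

for step in range(200):
    w -= lr * grad(w, x, y)   # step opposite the gradient

print(round(w, 3))  # converges to 3.0, where the loss is zero
```

The same loop, scaled up and with gradients computed by backpropagation, is what training a neural network amounts to: the system’s entire “learning” is the minimization of this one number.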
Humans themselves evolved under such an optimization process. Our primary fitness function is reproductive fitness, but our genes have encoded a variety of secondary objectives which (over evolutionary time) have been correlated with reproductive fitness. Our desires for love, friendship, happiness, etc. fall into this category. Our brains mainly work to satisfy these secondary objectives; the brain gets electrochemical reward signals, controlled by our genes, in the form of pain, pleasure, satisfaction, loneliness, etc. These secondary objectives may or may not remain aligned with the primary fitness function, which is why practitioners sometimes talk about “mesa-optimizers” or “inner vs. outer alignment.”