[Question] Estimating Returns to Intelligence vs Numbers, Strength and Looks

A key assumption in most x-risk arguments for AI is that the ability of an agent to exert control over the world increases rapidly with intelligence. After all, AI safety would be easy if all it required were ensuring that people remain far more numerous and physically capable than the AI, or even ensuring that the total computational power available to AI agents stays small compared to that available to humanity.

What these arguments require is that a single highly (but not infinitely) intelligent agent will be able to overwhelm the advantages humans might retain in terms of numbers, looks and computational power, either by manipulating people to do its bidding or by hacking other systems. However, I’ve yet to see any attempt to quantify the relationship between intelligence and control assumed in these arguments.

It occurs to me that we have information about these relationships that can inform such assumptions. For instance, if we wish to estimate the returns to intelligence in hacking, we could look at how the number of exploits discovered by researchers varies with their intelligence.
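
For concreteness, here is a minimal sketch of the kind of regression this would involve. Every number below is invented for illustration; a real estimate would need actual data on researchers’ measured intelligence and their exploit counts.

```python
import numpy as np

# Hypothetical data (invented for illustration): IQ scores of security
# researchers and the number of exploits each found over a fixed period.
iq = np.array([105, 112, 118, 124, 130, 137, 145, 152])
exploits = np.array([2, 3, 5, 6, 11, 14, 26, 41])

# Fit log(exploits) ~ a + b*IQ, so exp(b) is the multiplicative return
# to one additional IQ point under this (assumed) log-linear model.
b, a = np.polyfit(iq, np.log(exploits), 1)
print(f"multiplier per IQ point: {np.exp(b):.3f}")
print(f"multiplier per 15-point (1 SD) gain: {np.exp(15 * b):.2f}")
```

The shape of the fitted curve matters as much as the slope: if returns look log-linear or faster across the observed range, that supports the x-risk assumption, while returns that flatten out near the top of the range would undercut it.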

To estimate the returns to intelligence in terms of manipulation, we could look at the distribution of intelligence among highly effective politicians/media personalities and compare it to the distribution of other traits like height or looks. Or, if we assume that evolution largely selects for the ability to influence others, we could look at the distribution of these traits in the general population.
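
A minimal sketch of that comparison (again with invented numbers): express how far the selected group sits above the general-population mean on each trait, in population standard deviations, and compare the shifts.

```python
# Hypothetical summary statistics (all numbers invented): population
# mean and SD for each trait, plus the mean of a selected group such
# as successful politicians. A larger standardized shift suggests the
# selection process rewards that trait more strongly.
population = {"iq": (100.0, 15.0), "height_cm": (176.0, 7.0)}
politicians = {"iq": 115.0, "height_cm": 181.0}

for trait, (mu, sigma) in population.items():
    shift_sd = (politicians[trait] - mu) / sigma
    print(f"{trait}: selected group sits {shift_sd:+.2f} SD above the mean")
```

If the standardized shift on intelligence dwarfs the shifts on height or looks, that would be (weak) evidence that influence over others scales more strongly with intelligence than with those other traits.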

I realize that doing this would probably require a number of substantial assumptions, but I’m curious if anyone has tried. And yes, I realize this entirely ignores the issue of defining intelligence beyond human capability (though if the notion has any validity, we could probably use something like the rate at which unknown theorems, weighted by importance, can be proved).