Humans, chimpanzees and other animals

[Epistemic status: underinformed musings; I am posting this not because I am sure of anything in it but because the point seems important and I don’t recall seeing anyone else make it. Maybe that’s because it’s wrong.]

A common analogy for the relationship between postulated superhumanly-smart AIs and humans is the relationship between humans and chimpanzees. See, e.g., https://intelligence.org/2017/12/06/chollet/ where Eliezer Yudkowsky counters various arguments made by François Chollet by drawing this analogy.

It’s pretty compelling. Humans are a bit like chimps but substantially smarter … and, lo, humans control the future of life on earth (including, in particular, the future of chimpanzee life on earth) in a way chimps absolutely do not.

But wait. Humans coexist with chimps, and are smarter, and are utterly dominant over them: fair enough. But surely we want to do better than a sample size of 1. What other cases are there where animals of different intelligence levels coexist?

Well, for instance, chimps coexist with lions and leopards, and chimps are presumably the smarter party. Are chimp-leopard relations anything like human-chimp or human-leopard relations? So far as I can tell, no. Chimps don't appear to reshape their environment radically for their own purposes. When they encounter other animals such as lions and leopards, they not infrequently get killed.

In general, I'm not aware of any pattern of the form "smarter animals consistently face no threat from less-smart animals" or "smarter animals consistently wipe out less-smart animals that threaten them" or "the smartest type of animal in any area controls the whole local ecosystem".

(I am not an expert. I could be all wrong about this. I will be glad of any corrections.)

What all this suggests, at least to me, is that what’s going on with humans and chimpanzees is not “smarter animal wins” but something more like “there is a qualitative difference between humans and other animals, such that animals on the smarter side of this divide win against ones on the less-smart side”.

There might be a similar divide between us and our prospective AI overlords. I can think of various things that might turn out to have that effect. But so far as I can tell, it’s a genuinely open question, and if there’s some reason to be (say) 90% confident that there will be such a divide I haven’t seen it.