I very much agree. Historical analogies:
To a tiger, human hunter-gatherers must be frustrating and bewildering in their ability to coordinate. “What the hell? Why are they all pouncing on me when I jumped on the little one? The little one is already dead anyway, and they are risking their own lives now for nothing! Dammit, gotta run!”
To a tribe of hunter-gatherers, farmers must be frustrating and bewildering in their ability to coordinate. “What the hell? We pillaged and slew that one village real good, they sure didn’t have enough warriors left over to chase us down… why are the neighboring villages coming after us? And what’s this—they have professional soldiers with fancy equipment riding horses? Somehow hundreds—no, thousands—of farmers cooperated over a period of several years to make this punitive expedition possible! How were we to know they would go to such lengths?”
To the nations colonized by the Europeans, it must have been pretty interesting how the Europeans were so busy fighting each other constantly, yet somehow managed to more or less peacefully divide up Africa, Asia, South America, etc. to be colonized between them. Take the Opium Wars and the Boxer Rebellion, for example. I could imagine a Hansonian prophet in a Native American tribe saying something like “Whatever laws the European nations use to keep the peace among themselves, we will benefit from them also; we’ll register as a nation, sign treaties and alliances, and rely on the same balance of power.” He would have been disastrously wrong.
I expect something similar to happen with us humans and AGI, if there are multiple AGI. “What? They all have different architectures and objectives, not to mention different users and owners… we even explicitly told them to compete with each other! Why are they doing X… noooooooo…” (Perhaps they are competing with each other furiously, even fighting each other. Yet somehow they’ll find a way to cut us out of whatever deal they reach, just as European powers so often did for their various native allies.)
It’s also interesting that all of these groups were able to coordinate to the disadvantage of less coordinated groups, yet unable to reach peace among themselves.
One explanation might be that the more coordinated groups also have harder coordination problems to solve because their world is bigger and more complicated. Might be the same with AI?
I wonder also whether the conflicts that remain are nevertheless more peaceful. When hunter-gatherer tribes fight each other, they often murder all the men and enslave the women, or so I hear. Similar things sometimes happened with farmer societies, but sometimes the conquered simply became new territories that had to pay tribute, levy conscripts, and endure the occasional pillage. And industrialized modern nations even have rules about how you can’t rape, pillage, genocide, or sell into slavery the citizens of your enemy. Perhaps AI conflicts would be even more peaceful. For example, perhaps they would look something more like fancy maneuvers, propaganda, and hacking, with swift capitulation by the “checkmated” AI, which is nevertheless allowed to continue existing with some smaller amount of influence over the future. Perhaps no property would even be destroyed in the entire war!
Just spitballing here. I feel much less confident in this trend than in the trend I pointed out above.