The Case against Mixed Deployment
The most likely way that things go very bad is conflict between AIs-who-care-more-about-humans and AIs-who-care-less-about-humans, wherein the latter pessimize the former. There are game-theoretic models which predict this may happen, and the history of human conflict shows that these predictions bear out even when the agents are ordinary human-level intelligences who can't read each other's source code.
My best guess is that the acausal dynamics between superintelligences shake out well, but the causal dynamics between ordinary human-level AIs probably shake out badly. This is my best case against mixed deployment.
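As a concrete, if very simplified, illustration of the kind of game-theoretic model gestured at above, here is a toy one-shot Prisoner's Dilemma in Python. The payoff numbers and the DefectBot/CliqueBot agents are my own illustrative assumptions, not anything specific from this post: agents who cannot verify each other's decision procedure land in mutual defection, whereas agents who can read each other's source can sustain cooperation (the program-equilibrium idea).

```python
# A toy one-shot Prisoner's Dilemma. All names and payoff numbers here are
# illustrative assumptions, not taken from the post.

PAYOFFS = {  # (my_move, their_move) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def run(src: str, own_src: str, opponent_src: str) -> str:
    """Execute an agent's source; its move() may inspect the opponent's code."""
    env = {"MY_SOURCE": own_src}
    exec(src, env)
    return env["move"](opponent_src)

def play(program_a: str, program_b: str) -> tuple[int, int]:
    a_move = run(program_a, program_a, program_b)
    b_move = run(program_b, program_b, program_a)
    return PAYOFFS[(a_move, b_move)], PAYOFFS[(b_move, a_move)]

# An agent that cannot usefully condition on the opponent's code: defection dominates.
DEFECT_BOT = "def move(opponent_src): return 'D'"

# An agent that *can* read the opponent's code and cooperates only with an
# exact copy of itself -- the simplest program-equilibrium construction.
CLIQUE_BOT = """
def move(opponent_src):
    return 'C' if opponent_src == MY_SOURCE else 'D'
"""

print(play(DEFECT_BOT, DEFECT_BOT))  # (1, 1): mutual defection
print(play(CLIQUE_BOT, CLIQUE_BOT))  # (3, 3): transparency makes cooperation stable
print(play(CLIQUE_BOT, DEFECT_BOT))  # (1, 1): and the cooperator is not exploited
```

The sketch is only directional: the legibility that lets the mutual cooperators verify each other is exactly what ordinary, opaque human-level agents lack, which is why their causal interactions look more like the first line of output than the second.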