🤔 Coordination explosion before intelligence explosion...?

Epistemic status: a musing I wanted to throw out there.

A traditional AI risk worry has been the notion of an intelligence explosion: An AI system will rapidly grow in intelligence and become able to make huge changes using small,[1] subtle[2] tricks such as bioengineering or hacking. Since small actions are not that tightly regulated, these huge changes would be made in a relatively unregulated way, probably destroying a lot of things, maybe even the entire world or human civilization.

Modern AI systems such as LLMs seem to be making rapid progress in turning sensory data into useful information, aggregating information from messy sources, processing information in commonsense ways, and delivering information to people. These abilities do not seem likely to generalize to bioengineering or hacking (which involve generating novel capabilities), but they do seem plausibly useful for something: coordination in particular.


Two scenarios of interest:

Coordination implosion: Some people suggest that because modern AI systems are extremely error-prone, they will not be useful for anything except stuff like spam, which degrades our coordination abilities. I’m not sure this scenario is realistic, because a lot of people are working on making these systems reliable enough for genuinely useful work.

Coordination explosion: If basic information processing can be automated, it seems like we might be able to coordinate much better. We are already seeing this with chatbots that work as assistants, sometimes giving useful advice based on their mountains of integrated knowledge. But we could imagine going further, e.g. by automatically registering people’s experiences and actions, then aggregating this information and routing it to relevant places.

(For instance, maybe a software company installs AI-based surveillance, which notices when developers encounter bugs and takes note of how they solve them, so that it can advise future developers who run into similar bugs.)
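To make the bug-advice example concrete, here is a minimal sketch of the register-aggregate-route loop. It is illustrative only: the `BugAdvisor` class and its interface are hypothetical, and plain string similarity from Python’s standard library stands in for whatever retrieval a real system would use.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class Incident:
    """A bug someone hit, plus how it was eventually resolved."""
    error_text: str
    resolution: str


class BugAdvisor:
    """Toy coordination store: log solved bugs, then surface them
    to the next developer who hits something similar."""

    def __init__(self) -> None:
        self._incidents: list[Incident] = []

    def record(self, error_text: str, resolution: str) -> None:
        # In the scenario above, an AI observer would log this
        # automatically rather than relying on manual reports.
        self._incidents.append(Incident(error_text, resolution))

    def advise(self, error_text: str, threshold: float = 0.5) -> list[Incident]:
        # Route past resolutions to whoever sees a similar-looking error,
        # best match first. Real systems would use richer retrieval.
        def similarity(incident: Incident) -> float:
            return SequenceMatcher(None, incident.error_text, error_text).ratio()

        matches = [i for i in self._incidents if similarity(i) >= threshold]
        return sorted(matches, key=similarity, reverse=True)


advisor = BugAdvisor()
advisor.record(
    "TypeError: 'NoneType' object is not subscriptable in parse_config",
    "parse_config returned None on empty files; added an early guard.",
)
for match in advisor.advise("TypeError: 'NoneType' object is not subscriptable"):
    print(match.resolution)
```

The interesting design question is the routing step: in this sketch the developer has to ask, whereas the scenario above imagines the system noticing the error and pushing relevant advice unprompted.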

This might revolutionize the way we act. Rather than having to create, spread, and collect information ourselves, maybe we would end up always having relevant information at hand, ready to inform our decisions. With a bit of rationing, we might even be able to keep spam down to a workable level.


I’m not particularly sure this is what things are going to look like. However, I think the possibility is useful to keep in mind: There may be an intermediate phase between now and “full AGI”, where we have a sort of transformative artificial intelligence, but not one that leads to an intelligence explosion. There may still be an intelligence explosion afterwards. Or not, if you don’t believe in intelligence explosions.

I foresee privacy being one counteracting force. These sorts of systems seem to work better the more they invade your privacy, so people will resist them.

  1. ^

    Small = Involving relatively minor changes in terms of e.g. matter manually moved.

  2. ^

    Subtle = Dependent on getting many “bits” right at a distance.