The Dumbest Possible Gets There First

This was written as part of the first Refine blog post day.

A sneaking suspicion I've found difficult to shake off while following AI risk discussion is that concerns about superintelligent AI, while clearly valid and important, may well be jumping the gun: nearer-term AI risks that are not at all unlikely are looming, and could wipe us out long before we reach that point, especially if we do very little to mitigate them.

One reason for this intuition comes from extending an occasionally repeated idea about human intelligence: that humans are nearly the dumbest possible creatures capable of developing a technological civilisation. Part of the reasoning behind this take is that wading against entropy to create sophisticated, powerful, and complex intelligent agents is difficult.

That is to say, evolution as an optimisation process was very unlikely to produce intelligence and general capability greatly in excess of what was needed to instantiate something like "technological civilisation", because of two assumptions taken together:

1. The first species evolution produced that crossed the necessary capability thresholds would be the one to instantiate it in the world, regardless of how barely above those thresholds it might have been, and

2. Greater intelligence/capability is in some sense generally more "difficult" to produce, requiring stronger optimisation and luckier dice rolls than lesser intelligence/capability.

In a similar way, it seems very possible to me that we may never need to worry about being wiped out by unaligned superintelligent beings far beyond our comprehension, if only because I expect that significantly less sophisticated AI systems, ones with just enough capability to clear the threshold for doing the job, will do so first.

Of course, there might be large discontinuities in the relationship between how difficult an AI is to create and how capable it is. For example, if the landscape of possible AI designs makes it relatively easy to fall into gravity wells of classic fast-takeoff-style recursive self-improvement, where hitting certain points in design space suddenly yields far more intelligence/capability, then we may still need to worry about superintelligences that tower over humanity arriving early.

But if not, consider that the first AI to cause a trillion-dollar market crash did so accidentally, as the result of unfortunate interactions between "dumb", rather simple and unsophisticated high-frequency trading algorithms, and not, say, a meticulously crafted and optimised system designed by short-sellers or vandals, or the machinations of some genius general AI.

In this way and in general, I expect that as we cede more and more control over our world to automated systems with less and less human input, the potential adverse impacts of AI alignment failures rise too. The threshold of intelligence required to wreak havoc drops significantly when you begin with control over all the necessary resources already in hand. How vulnerable the world happens to be also plays a big role, as AI seems to me to have a lot of potential to leverage any upcoming grey- or black-ball technologies.

And even if we somehow navigate these "dumb" risks without wiping ourselves out, I worry that the strategies we scrounge up to avoid them will be of a sort very unlikely to generalise once the superintelligence risks do eventually rear their heads. But none of that will matter if we don't even get there, and the shambolic nature of a human society composed of nearly the dumbest possible creatures able to create technological civilisation does not fill me with optimism that we'll get that far.