Artificial intelligence might end the world. More likely, it will crush our ability to make sense of the world—and so will crush our ability to act in it.
AI will make critical decisions that we cannot understand. Governments will take radical actions that make no sense to their own leaders. Corporations, guided by artificial intelligence, will find their own strategies incomprehensible. University curricula will turn bizarre and irrelevant. Formerly respected information sources will publish mysteriously persuasive nonsense. We will feel our loss of understanding as pervasive helplessness and meaninglessness. We may take up pitchforks and revolt against the machines, and in so doing, we may destroy the systems we depend on for survival...
We don’t know how our AI systems work, we don’t know what they can do, and we don’t know what broader effects they will have. They do seem startlingly powerful, and a combination of their power with our ignorance is dangerous...
In our absence of technical understanding, those concerned with future AI risks have constructed “scenarios”: stories about what AI may do… So far, we’ve accumulated a few dozen reasonably detailed, reasonably plausible bad scenarios. We’ve found zero that lead to good outcomes… Unless we can find some specific beneficial path, and can gain some confidence in taking it, we should shut AI down.
This book considers scenarios that are less bad than human extinction, but which could get worse than run-of-the-mill disasters that kill only a few million people.
Previous discussions have mainly neglected such scenarios. Two existing fields have focused on comparatively small risks and on extreme ones, respectively. AI ethics concerns uses of current AI technology by states and powerful corporations to categorize individuals unfairly, particularly when that reproduces preexisting patterns of oppressive demographic discrimination. AI safety treats extreme scenarios involving hypothetical future technologies which could cause human extinction. It is easy to dismiss AI ethics concerns as insignificant, and AI safety concerns as improbable. I think both dismissals would be mistaken. We should take seriously both ends of the spectrum.
However, I intend to draw attention to a broad middle ground of dangers: more consequential than those considered by AI ethics, and more likely than those considered by AI safety. Current AI is already creating serious, often overlooked harms, and is potentially apocalyptic even without further technological development. Neither AI ethics nor AI safety has done much to propose plausibly effective interventions against these middle-ground risks.
We should consider many such scenarios, devise countermeasures, and implement them.