A Parable of Elites and Takeoffs

Let me tell you a parable of the future. Let’s say, 70 years from now, in a large Western country we’ll call Nacirema.

One day far from now: scientific development had continued apace, and a large government project (with, unsurprisingly, a lot of military funding) had taken the scattered pieces of cutting-edge research and put them together into a single awesome technology, which could revolutionize (or at least, vastly improve) all sectors of the economy. Leading thinkers had long forecast that this area of science’s mysteries would eventually yield to progress, despite theoretical confusion and perhaps-disappointing initial results and the scorn of more conservative types and the incomprehension (or outright disgust, for ‘playing god’) of the general population, and at last—it had! The future was bright.

Unfortunately, it was hurriedly decided to use an early prototype outside the lab, in an impoverished foreign country. Whether out of arrogance, bureaucratic inertia, overconfidence on the part of the involved researchers, condescending racism, or the need to justify the billions of grant-dollars that had cumulatively gone into the project over the years by showing some use of it—whatever the reasons, they no longer mattered after the final order was signed. The technology was used, but the consequences turned out to be horrific: over what seemed like mere days, entire cities collapsed and scores—hundreds—of thousands of people died. (Modern economies are extremely interdependent and fragile, and small disruptions can have large consequences; more people died in the chaos of the evacuation of the areas around Fukushima than will die of the radiation.)

An unmitigated disaster. Worse, the technology didn’t even accomplish its assigned goal—that was achieved thanks to a third party’s actions! Ironic. But that’s how life goes: ‘Man Proposes, God Disposes’.

So, what to do with the tech? The positive potential was still there, but no one could doubt anymore that there was a horrific dark side: they had just seen what it could do if misused, even if the authorities (as usual) were spinning the events as furiously as possible to avoid frightening the public. You could put it under heavy government control, and they did.

But what was to stop Nacirema’s rivals from copying the technology and using it domestically or as a weapon against Nacirema? In particular, Nacirema’s enormous furiously-industrializing rival far to the East in Asia, which aspired to regional hegemony, had a long history of being an “oriental despotism” and still had a repressive political system—ruled by an opaque corrupt oligarchy—which abrogated basic human rights such as free speech, and was not a little racist/​xenophobic & angry at historical interference in its domestic affairs by Seilla & Nacirema…

The ‘arms race’ was obvious to anyone who thought about the issue. You had to obtain your own tech or be left in the dust. But an arms race was terrifyingly dangerous—one power with the tech was bad enough, but what if there were two holders? A dozen? There was no reason to expect all the wishes to be benign once everyone had their own genie-in-a-bottle. It would not be hyperbolic to say that the fate of global civilization was at stake (even if there were survivors off-planet or in Hanson-style ‘disaster refuges’, they could hardly rebuild civilization on their own; not to mention that a lot of resources like hydrocarbons had already been depleted beyond the ability of a small primitive group to exploit them), or maybe even the human race itself. If ever an x-risk was a clear and present danger, this was it.

Fortunately, the ‘hard takeoff’ scenario did not come to pass: each doubling of the tech’s power took years; nor was it something you could make in your bedroom, even if you knew the key insights (deducible by a grad student from published papers, as concerned agencies in Nacirema proved). Rather, the experts forecast a slower takeoff, on a more human time-scale, in which the technology escalated in power over the next two or three decades; importantly, they thought that the Eastern rival’s scientists would not be able to clone the technology for another decade or perhaps longer.

So one of the involved researchers—a bona fide world-renowned genius who had made signal contributions to the design of the computers and software involved and had the utmost credibility—made the obvious suggestion. Don’t let the arms race start. Don’t expose humanity to an unstable equilibrium of the sort which has collapsed many times in human history. Instead, Nacirema should boldly deliver an ultimatum to the rival: submit to examination and verification that they were not developing the tech, or be destroyed. Stop the contagion from spreading and root out the x-risk. Research in the area would be proscribed, as almost all of it was inherently dual-use.

Others disagreed, of course, with many alternative proposals: perhaps researchers could be trusted to self-regulate; or, related research could be regulated by a special UN agency; or the tech could be distributed to all major countries to reach an equilibrium immediately; or, treaties could be signed; or Nacirema could voluntarily abandon the technology, continue to do things the old-fashioned way, and lead by moral authority.

You might think that the politicians would do something, even if they ignored the genius: the prognostications of a few obscure researchers and of short stories published in science fiction had turned out to be true; the dangers had been realized in practice, and there was no uncertainty about what a war with the tech would entail; the logic of arms races had been well-documented in many instances to lead to instability and propel countries into war (consider the battleship arms race leading up to WWI); the proposer had impeccable credentials and deep domain-specific expertise, and was far from alone in being deeply concerned about the issue; there were multiple years in which to cope with the crisis after fair warning had been given, so there was enough time; and so on. If the Nacireman political system were ever to be willing to take major action to prevent an x-risk, this would seem to be the ideal scenario. So did they?

Let’s step back a bit. One might have faith in the political elites of this country. Surely, given the years of warning as the tech became more sophisticated, people would see that this time really was different, that this time it was the gravest threat humanity had faced, that elite scientists’ warnings of doomsday would be taken seriously; surely everyone would see the truth of proposition X, leading them to endorse Y and agree with the ‘extremists’ about policy decision Z (to condense our hopes into one formula); how could we doubt that policymakers and research funders would begin to respond to the tech safety challenge? After all, we can point to some other instances where policymakers reached good outcomes for minor problems, like CFC damage to the ozone layer.

So with all that in mind, in our little future world, did the Nacireman political system respond effectively?

I’m a bit cynical, so let’s say the answer was… No. Of course not. They did not follow the genius’s plan.

And it’s not that they found a better plan, either. (Let’s face it, any plan calling for more war has to be considered a last resort, even if you have a special new tech to help, and is likely to fail.) Nothing meaningful was done. “Man plans, God laughs.” The trajectory of events was indistinguishable from the usual story of bureaucratic inertia and self-serving behavior by various groups. After all, what was in it for the politicians? Did such a strategy swell any corporation’s profits? Or offer scope for further taxation & regulation? Or could it be used to appeal to anyone’s emotion-driven ethics by playing on disgust or purity or in-group loyalty? The strategy had no constituency except those who were concerned by an abstract threat in the future (perhaps, as their opponents insinuated, they were neurotic ‘hawks’ hellbent on war). Besides, the Nacireman people were exhausted from long years of war in multiple foreign countries and a large domestic depression whose scars remained. Time passed.

Eventually the experts turned out to be wrong, but in the worst possible way: the rival took half the projected time to develop its own tech, and the window of opportunity snapped shut. The arms race had begun, and humanity would tremble in fear as it wondered whether it would live out the century or the unthinkable would happen.

Good luck, you people of the future! I wish you all the best, although I can’t be optimistic; if you survive, it will be by the skin of your teeth, and I suspect that, due to hindsight bias and near-miss bias, you won’t even be able to appreciate afterwards how dire the situation was, and will forget your peril or minimize the danger or reason that the tech couldn’t have been that dangerous since you survived—which would be a sad & pathetic coda indeed.

The End.

(Oh, I’m sorry. Did I write “70 years from now”? I meant: “70 years ago”. The technology is, of course, nuclear fission, which had many potential applications in the civilian economy—if nothing else, every sector benefits from electricity ‘too cheap to meter’; Nacirema is America & the eastern rival is Russia; the genius is John von Neumann; the SF stories were by Heinlein & Cartmill among others—the latter giving rise to the Astounding incident; and we all know how the Cold War led civilization to the brink of thermonuclear war. Why, did you think it was about something else?)

This was written for a planned essay on why computational complexity/​diminishing returns doesn’t imply AI will be safe, but who knows when I’ll finish that, so I thought I’d post it separately.