This Failing Earth

Suppose I told you about a certain country, somewhere in the world, in which some of the cities have degenerated into gang rule. Some such cities are ruled by a single gang leader, others have degenerated into almost complete lawlessness. You would probably conclude that the cities I was talking about were located inside what we call a “failed state”.

So what does the existence of North Korea say about this Earth?

No, it’s not a perfect analogy. But the thought does sometimes occur to me, to wonder if the camel has two humps: whether there are failed Earths and successful Earths in the great macroscopic superposition popularly known as “many worlds”, and we’re not one of the successful ones. I think of this as the “failed Earth” hypothesis.

Of course the camel could also have three or more humps, and it’s quite easy to imagine Earths that are failing much worse than this, epic failed Earths ruled by the high-tech heirs of Genghis Khan or the Catholic Church. Oh yes, it could definitely be worse...

...and the “failed state” analogy is hardly perfect; “failed state” usually refers to failure to integrate into the global economy, but a failed Earth is not failing to integrate into anything larger...

...but the question does sometimes haunt me: whether, among the alternative Everett branches of Earth, we could identify a distinct cluster of “successful” Earths, and find that we’re not in it. It may not matter much in the end; the ultimate test of a planet probably comes down to Friendly AI, and Friendly AI may come down to nine people in a basement doing math. I keep my hopes up, and think of this as a “failing Earth” rather than a “failed Earth”.

But it’s a thought that comes to mind, now and then. Reading about the ongoing Market Complexity Collapse and wondering if this Earth has failed at one of the basic functions of a global economy, in the same way that Rome, in its later days, failed to solve the problem of orderly transition of power between Caesars.

Of course it’s easy to wax moralistic about people who aren’t solving their coordination problems the way you like. I don’t mean this to degenerate into a standard diatribe about the sinfulness of this Earth, the sort of clueless plea embodied perfectly by Simon and Garfunkel:

I dreamed I saw a mighty room
The room was filled with men
And the paper they were signing said
They’d never fight again

It’s a cheap pleasure to wax moralistic about failures of global coordination.

But visualizing the alternative Everett branches of Earth, spread out and clustered—for me, at least, that seems to help trigger my mind into a non-Simon-and-Garfunkel mode of thinking. If the successful Earths lack a North Korea, how did they get there? Surely not just by signing a piece of paper saying they’d never fight again.

Indeed, our Earth’s Westphalian concept of sovereign states is the main thing propping up Somalia and North Korea. There was a time when any state that failed that badly would be casually conquered by a more successful neighbor. So maybe the successful Earths don’t have a Westphalian concept of sovereignty; maybe our Earth’s concept of inviolable borders represents a failure to solve one of the key functions of a planetary civilization.

Maybe the successful Earths are the ones where the ancient Greeks, or equivalent thereof, had the “Aha!” of Darwinian evolution… and at least one country started a eugenics program that successfully selected for intelligence, well in advance of nuclear weapons being developed. If that makes you uncomfortable, it’s meant to—the successful Earths may not have gotten there through Simon and Garfunkel. And yes, of course the ancient Greeks attempting such a policy could and probably would have gotten it terribly wrong; maybe the epic failed Earths are the ones where some group had the Darwinian insight and then successfully selected for prowess as warriors. I’m not saying “Go eugenics!” would have been a systematically good idea for ancient Greeks to try as policy...

But maybe the top cluster of successful Earths, among human Everett branches, stumbled into that cluster because some group stumbled over eugenic selection for intelligence, and then, being a bit smarter, realized what it was they were doing right, so that the average IQ got up to 140 well before anyone developed nuclear weapons. (And then conquered the world, rather than respecting the integrity of borders.)

What would a successful Earth look like? How high is their sanity waterline? Are there large organized religions in successful Earths—is their presence here a symptom of our failure to solve the problems of a planetary civilization? You can ring endless changes on this theme, and anyone with an accustomed political hobbyhorse is undoubtedly imagining their pet Utopia already. For my own part, I’ll go ahead and wonder, if there’s an identifiable “successful” cluster among the human Earths, what percentage of them have worldwide cryonic preservation programs in place.

One point that takes some of the sting out of our ongoing muddle—at least from my perspective—is my suspicion that the Earths in the successful cluster, even those with an average IQ of 140 as they develop computers, may not be in much of a better position to really succeed, to solve the Friendly AI problem. A rising tide lifts all boats, and Friendly AI is a race between cautiously developed AI and insufficiently-cautiously-developed AI. “Successful” Earths might even be worse off, if they solve their global coordination problems well enough to put the whole world’s eyes on the problem and turn the development over to prestigious bureaucrats. It’s not a simple issue like cryonics that we’re talking about. If, in the end, “successful Earths” of the human epoch aren’t in a much better position for the catastrophically high-level pass-fail test of the posthuman transition than our own “failing Earth”… then this Earth isn’t all that much more doomed just because we screwed up our financial system, international relations, and basic rationality training.

Is such speculation at all useful? “Live in your own world,” as the saying goes...

...Well, it might not be a saying here, but it’s probably a saying in those successful Earths where the scientific community was long since trained in formal Bayesianism and readily accepted the obvious truth of many-worlds… as opposed to our own world and its constantly struggling academia, where senior scientists spend most of their time writing grant proposals...

(Michael Vassar has an extended thesis on how the scientific community in our Earth has been slowly dying since 1910 or so, but I’ll let him decide whether it’s worth his time to write up that post.)

It’s usually not my intent to depress people. I have an accustomed saying that if you want to depress yourself, look at the future, and if you want to cheer yourself up, look at the past. By analogy—well, for all we know, we might be in the second-highest major cluster, or in the top 10% of all Earths even if not one of the top 1%. It might be that most Earths have global orders descended from the conquering armies of the local Church. I recently had occasion to visit the National Museum of Australia in Canberra, and it’s shocking to think of how easily a human culture can spend thirty thousand years without inventing the bow and arrow. Really, we did do quite well for ourselves in a lot of ways… I think?

A sense of beleagueredness, a sense that everything is decaying and dying into sinfulness—these memes are more useful for gluing together cults than for inspiring people to solve their coordination problems.

But even so—it’s a thought that I have, when I see some aspect of the world going epically awry, to wonder if we’re in the cluster of Earths that fail. It’s the sort of thought that inspires me, at least, to go down into that basement and solve the math problem and make everything come out all right anyway. Because if there’s one thing that the intelligence explosion really messes up, it’s the dramatic unity of human progress—if this were a world with a supervised course of history, we’d be worrying about making it to Akon’s world through a continuous developmental schema, not about making a sudden left turn to solve a math problem.

It may be that in the fractiles of the human Everett branches, we live in a failing Earth—but it’s not failed until someone messes up the first AI. I find that a highly motivating thought. Your mileage may vary.