This Failing Earth

Suppose I told you about a certain country, somewhere in the world, in which some of the cities have degenerated into gang rule. Some such cities are ruled by a single gang leader; others have degenerated into almost complete lawlessness. You would probably conclude that the cities I was talking about were located inside what we call a “failed state”.

So what does the existence of North Korea say about this Earth?

No, it’s not a perfect analogy. But the thought does sometimes occur to me, to wonder if the camel has two humps. If there are failed Earths and successful Earths, in the great macroscopic superposition popularly known as “many worlds”—and we’re not one of the successful. I think of this as the “failed Earth” hypothesis.

Of course the camel could also have three or more humps, and it’s quite easy to imagine Earths that are failing much worse than this, epic failed Earths ruled by the high-tech heirs of Genghis Khan or the Catholic Church. Oh yes, it could definitely be worse...

...and the “failed state” analogy is hardly perfect; “failed state” usually refers to failure to integrate into the global economy, but a failed Earth is not failing to integrate into anything larger...

...but the question does sometimes haunt me, as to whether in the alternative Everett branches of Earth, we could identify a distinct cluster of “successful” Earths, and we’re not in it. It may not matter much in the end; the ultimate test of a planet’s existence probably comes down to Friendly AI, and Friendly AI may come down to nine people in a basement doing math. I keep my hopes up, and think of this as a “failing Earth” rather than a “failed Earth”.

But it’s a thought that comes to mind, now and then. Reading about the ongoing Market Complexity Collapse and wondering if this Earth failed to solve one of the basic functions of global economics, in the same way that Rome, in its later days, failed to solve the problem of orderly transition of power between Caesars.

Of course it’s easy to wax moralistic about people who aren’t solving their coordination problems the way you like. I don’t mean this to degenerate into a standard diatribe about the sinfulness of this Earth, the sort of clueless plea embodied perfectly by Simon and Garfunkel:

I dreamed I saw a mighty room
The room was filled with men
And the paper they were signing said
They’d never fight again

It’s a cheap pleasure to wax moralistic about failures of global coordination.

But visualizing the alternative Everett branches of Earth, spread out and clustered—for me, at least, that seems to help trigger my mind into a non-Simon-and-Garfunkel mode of thinking. If the successful Earths lack a North Korea, how did they get there? Surely not just by signing a piece of paper saying they’d never fight again.

Indeed, our Earth’s Westphalian concept of sovereign states is the main thing propping up Somalia and North Korea. There was a time when any state that failed that badly would be casually conquered by a more successful neighbor. So maybe the successful Earths don’t have a Westphalian concept of sovereignty; maybe our Earth’s concept of inviolable borders represents a failure to solve one of the key functions of a planetary civilization.

Maybe the successful Earths are the ones where the ancient Greeks, or equivalent thereof, had the “Aha!” of Darwinian evolution… and at least one country started a eugenics program that successfully selected for intelligence, well in advance of nuclear weapons being developed. If that makes you uncomfortable, it’s meant to—the successful Earths may not have gotten there through Simon and Garfunkel. And yes, of course the ancient Greeks attempting such a policy could and probably would have gotten it terribly wrong; maybe the epic failed Earths are the ones where some group had the Darwinian insight and then successfully selected for prowess as warriors. I’m not saying “Go eugenics!” would have been a systematically good idea for ancient Greeks to try as policy...

But maybe the top cluster of successful Earths, among human Everett branches, stumbled into that cluster because some group stumbled over eugenic selection for intelligence, and then, being a bit smarter, realized what it was they were doing right, so that the average IQ got up to 140 well before anyone developed nuclear weapons. (And then conquered the world, rather than respecting the integrity of borders.)

What would a successful Earth look like? How high is their standard sanity waterline? Are there large organized religions in successful Earths—is their presence here a symptom of our failure to solve the problems of a planetary civilization? You can ring endless changes on this theme, and anyone with an accustomed political hobbyhorse is undoubtedly imagining their pet Utopia already. For my own part, I’ll go ahead and wonder, if there’s an identifiable “successful” cluster among the human Earths, what percentage of them have worldwide cryonic preservation programs in place.

One point that takes some of the sting out of our ongoing muddle—at least from my perspective—is my suspicion that the Earths in the successful cluster, even those with an average IQ of 140 as they develop computers, may not be in much of a better position to really succeed, to solve the Friendly AI problem. A rising tide lifts all boats, and Friendly AI is a race between cautiously developed AI and insufficiently-cautiously-developed AI. “Successful” Earths might even be worse off, if they solve their global coordination problems well enough to put the whole world’s eyes on the problem and turn the development over to prestigious bureaucrats. It’s not a simple issue like cryonics that we’re talking about. If, in the end, “successful Earths” of the human epoch aren’t in a much better position for the catastrophically high-level pass-fail test of the posthuman transition than our own “failing Earth”… then this Earth isn’t all that much more doomed just because we screwed up our financial system, international relations, and basic rationality training.

Is such speculation at all useful? “Live in your own world”, as the saying goes...

...Well, it might not be a saying here, but it’s probably a saying in those successful Earths where the scientific community has long since been trained in formal Bayesianism and readily accepted the obvious truth of many-worlds… as opposed to our own world and its constantly struggling academia, where senior scientists spend most of their time writing grant proposals...

(Michael Vassar has an extended thesis on how the scientific community in our Earth has been slowly dying since 1910 or so, but I’ll let him decide whether it’s worth his time to write up that post.)

It’s usually not my intent to depress people. I have an accustomed saying that if you want to depress yourself, look at the future, and if you want to cheer yourself up, look at the past. By analogy—well, for all we know, we might be in the second-highest major cluster, or in the top 10% of all Earths even if not one of the top 1%. It might be that most Earths have global orders descended from the conquering armies of the local Church. I recently had occasion to visit the National Museum of Australia in Canberra, and it’s shocking to think of how easily a human culture can spend thirty thousand years without inventing the bow and arrow. Really, we did do quite well for ourselves in a lot of ways… I think?

A sense of beleagueredness, a sense that everything is decaying and dying into sinfulness—these memes are more useful for gluing together cults than for inspiring people to solve their coordination problems.

But even so—it’s a thought that I have, when I see some aspect of the world going epically awry, to wonder if we’re in the cluster of Earths that fail. It’s the sort of thought that inspires me, at least, to go down into that basement and solve the math problem and make everything come out all right anyway. Because if there’s one thing that the intelligence explosion really messes up, it’s the dramatic unity of human progress—if this were a world with a supervised course of history, we’d be worrying about making it to Akon’s world through a continuous developmental schema, not making a sudden left turn to solve a math problem.

It may be that in the fractiles of the human Everett branches, we live in a failing Earth—but it’s not failed until someone messes up the first AI. I find that a highly motivating thought. Your mileage may vary.