Glenn Beck discusses the Singularity, cites SI researchers

From the final chapter of his new book Cowards, titled "Adapt or Die: The Coming Intelligence Explosion."

The year is 1678 and you've just arrived in England via a time machine. You take out your new iPhone in front of a group of scientists who have gathered to marvel at your arrival.

"Siri," you say, addressing the phone's voice-activated artificial intelligence system, "play me some Beethoven."

Dunh-Dunh-Dunh-Duuunnnhhh! The famous opening notes of Beethoven's Fifth Symphony, stored in your music library, play loudly.

“Siri, call my mother.”

Your mother's face appears on the screen, a Hawaiian beach behind her. "Hi, Mom!" you say. "How many fingers am I holding up?"

"Three," she correctly answers. "Why haven't you called more—"

"Thanks, Mom! Gotta run!" you interrupt, hanging up.

“Now,” you say. “Watch this.”

Your new friends look at the iPhone expectantly.

“Siri, I need to hide a body.”

Without hesitation, Siri asks: "What kind of place are you looking for? Mines, reservoirs, metal foundries, dumps, or swamps?" (I'm not kidding. If you have an iPhone 4S, try it.)

You respond "Swamps," and Siri pulls up a satellite map showing you nearby swamps.

The scientists are shocked into silence. What is this thing that plays music, instantly teleports video of someone across the globe, helps you get away with murder, and is small enough to fit into a pocket?

At best, your seventeenth-century friends would worship you as a messenger of God. At worst, you'd be burned at the stake for witchcraft. After all, as science fiction author Arthur C. Clarke once said, "Any sufficiently advanced technology is indistinguishable from magic."

Now, imagine telling this group that capitalism and representative democracy will take the world by storm, lifting hundreds of millions of people out of poverty. Imagine telling them their descendants will eradicate smallpox and regularly live seventy-five or more years. Imagine telling them that men will walk on the moon, that planes, flying hundreds of miles an hour, will transport people around the world, or that cities will be filled with buildings reaching thousands of feet into the air.

They'd probably escort you to the madhouse.

Unless, that is, one of the people in that group had been a man named Ray Kurzweil.

Kurzweil is an inventor and futurist who has done a better job than most at predicting the future. Dozens of the predictions from his 1990 book The Age of Intelligent Machines came true during the 1990s and 2000s. His follow-up book, The Age of Spiritual Machines, published in 1999, fared even better. Of the 147 predictions that Kurzweil made for 2009, 78 percent turned out to be entirely correct, and another 8 percent were roughly correct. For example, even though every portable computer had a keyboard in 1999, Kurzweil predicted that most portable computers would lack a keyboard by 2009. It turns out he was right: by 2009, most portable computers were MP3 players, smartphones, tablets, portable game machines, and other devices that lacked keyboards.

Kurzweil is most famous for his "law of accelerating returns," the idea that technological progress is generally "exponential" (like a hockey stick, curving up sharply) rather than "linear" (like a straight line, rising slowly). In non-geek-speak, that means our knowledge is like the compound interest on a bank account: it increases exponentially over time because it keeps building on itself. We won't experience one hundred years of progress in the twenty-first century, but rather twenty thousand years of progress (measured at today's rate).
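The compound-interest analogy is easy to check with a few lines of Python. This is only a sketch: the assumption that the rate of progress doubles every decade is one common reading of Kurzweil's argument, not a figure stated in the chapter, and the function name is mine.

```python
# A rough sketch of the "law of accelerating returns" arithmetic.
# Assumption (not from the chapter): the rate of progress doubles
# every ten years, like compound interest on knowledge.

def equivalent_years(calendar_years: int, doubling_time: float = 10.0) -> float:
    """Progress over calendar_years, measured in 'today-years', assuming
    the rate of progress doubles every doubling_time calendar years."""
    return sum(2 ** (t / doubling_time) for t in range(calendar_years))

linear = 100                         # "straight line": a century buys 100 today-years
accelerating = equivalent_years(100)  # "hockey stick": far more
print(f"linear century:       {linear} today-years")
print(f"accelerating century: {accelerating:,.0f} today-years")
```

Under these assumptions the accelerating century works out to roughly fourteen thousand "today-years", the same order of magnitude as Kurzweil's twenty-thousand figure.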

Many experts have criticized Kurzweil's forecasting methods, but a careful and extensive review of technological trends by researchers at the Santa Fe Institute came to the same basic conclusion: technological progress generally tends to be exponential (or even faster than exponential), not linear.

So, what does this mean? In his 2005 book The Singularity Is Near, Kurzweil shares his predictions for the next few decades:

  • In our current decade, Kurzweil expects real-time translation tools and automatic house-cleaning robots to become common.

  • In the 2020s he expects to see the invention of tiny robots that can be injected into our bodies to intelligently find and repair damage and cure infections.

  • By the 2030s he expects "mind uploading" to be possible, meaning that your memories and personality and consciousness could be copied to a machine. You could then make backup copies of yourself, and achieve a kind of technological immortality.


Age of the Machines?

"We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us."

—Jaan Tallinn, co-creator of Skype and Kazaa


If any of that sounds absurd, remember again how absurd the eradication of smallpox or the iPhone 4S would have seemed to those seventeenth-century scientists. That's because the human brain is conditioned to believe that the past is a great predictor of the future. While that might work fine in some areas, technology is not one of them. Just because it took decades to put two hundred transistors onto a computer chip doesn't mean that it will take decades to get to four hundred. In fact, Moore's Law, which states (roughly) that computing power doubles every two years, shows how technological progress must be thought of in terms of "hockey stick" progress, not "straight line" progress. Moore's Law has held for more than half a century already (we can currently fit 2.6 billion transistors onto a single chip) and there's little reason to expect that it won't continue to.
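The transistor arithmetic in that paragraph can be sanity-checked directly. A minimal sketch, using the illustrative counts from the text (the function name is mine):

```python
import math

# How many Moore's-Law doublings does it take to go from the text's
# two hundred transistors to 2.6 billion on a single chip?

def doublings_needed(start: int, target: int) -> float:
    """Number of doublings required to grow from start to target."""
    return math.log2(target / start)

start, target = 200, 2_600_000_000
d = doublings_needed(start, target)
print(f"{d:.1f} doublings")
print(f"about {2 * d:.0f} years at one doubling every two years")
```

That comes to about 23.6 doublings, or roughly 47 years at Moore's two-year pace, consistent with the half-century timescale the chapter mentions.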

But the aspect of his book that has the most far-ranging ramifications for us is Kurzweil's prediction that we will achieve a "technological singularity" in 2045. He defines this term rather vaguely as "a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed."

Part of what Kurzweil is talking about is based on an older, more precise notion of "technological singularity" called an intelligence explosion. An intelligence explosion is what happens when we create artificial intelligence (AI) that is better than we are at the task of designing artificial intelligences. If the AI we create can improve its own intelligence without waiting for humans to make the next innovation, this will make it even more capable of improving its intelligence, which will . . . well, you get the point. The AI can, with enough improvements, make itself smarter than all of us mere humans put together.

The really exciting part (or the scary part, if your vision of the future is more like the movie The Terminator) is that, once the intelligence explosion happens, we'll get an AI that is as superior to us at science, politics, invention, and social skills as your computer's calculator is to you at arithmetic. The problems that have occupied mankind for decades—curing diseases, finding better energy sources, etc.—could, in many cases, be solved in a matter of weeks or months.

Again, this might sound far-fetched, but Ray Kurzweil isn't the only one who thinks an intelligence explosion could occur sometime this century. Justin Rattner, the chief technology officer at Intel, predicts some kind of Singularity by 2048. Michael Nielsen, co-author of the leading textbook on quantum computation, thinks there's a decent chance of an intelligence explosion by 2100. Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion near the middle of the century. Leading philosopher David Chalmers is 50 percent confident an intelligence explosion will occur by 2100. Participants at a 2009 conference on AI tended to be 50 percent confident that an intelligence explosion would occur by 2045.

If we can properly prepare for the intelligence explosion and ensure that it goes well for humanity, it could be the best thing that has ever happened on this fragile planet. Consider the difference between humans and chimpanzees, which share 95 percent of their genetic code. A relatively small difference in intelligence gave humans the ability to invent farming, writing, science, democracy, capitalism, birth control, vaccines, space travel, and iPhones—all while chimpanzees kept flinging poo at each other.


Intelligent Design?

The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won't be able to negotiate with them any more than chimpanzees can negotiate with us. What if the machines don't want the same things we do?

The truth, unfortunately, is that every kind of AI we know how to build today definitely would not want the same things we do. To build an AI that does, we would need a more flexible "decision theory" for AI design and new techniques for making sense of human preferences. I know that sounds kind of nerdy, but AIs are made of math, and so math is really important for choosing which results you get from building an AI.

These are the kinds of research problems being tackled by the Singularity Institute in America and the Future of Humanity Institute in Great Britain. Unfortunately, our silly species still spends more money each year on lipstick research than we do on figuring out how to make sure that the most important event of this century (maybe of all human history)—the intelligence explosion—actually goes well for us.


Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child's play for them. Imagine living out your own personal fantasy in a different virtual world every day. Imagine exploring the galaxy at near light speed, with a few backup copies of your mind safe at home on earth in case you run into an exploding supernova. Imagine a world where resources are harvested so efficiently that everyone's basic needs are taken care of, and political and economic incentives are so intelligently fine-tuned that "world peace" becomes, for the first time ever, more than a Super Bowl halftime show slogan.

With self-improving AI we may be able to eradicate suffering and death just as we once eradicated smallpox. It is not the limits of nature that prevent us from doing this, but only the limits of our current understanding. It may sound like a paradox, but it's our brains that prevent us from fully understanding our brains.

Turf Wars

At this point you might be asking yourself: "Why is this topic in this book? What does any of this have to do with the economy or national security or politics?"

In fact, it has everything to do with all of those issues, plus a whole lot more. The intelligence explosion will bring about change on a scale and scope not seen in the history of the world. If we don't prepare for it, things could get very bad, very fast. But if we do prepare for it, the intelligence explosion could be the best thing that has happened since . . . literally ever.

But before we get to the kind of life-altering progress that would come after the Singularity, we will first have to deal with a lot of smaller changes, many of which will throw entire industries and ways of life into turmoil. Take the music business, for example. It was not long ago that stores like Tower Records and Sam Goody were doing billions of dollars a year in compact disc sales; now people buy music from home via the Internet. Publishing is currently facing a similar upheaval. Newspapers and magazines have struggled to keep subscribers, booksellers like Borders have been forced into bankruptcy, and customers are forcing publishers to switch to ebooks faster than the publishers might like.

All of this is to say that some people are already witnessing the early stages of upheaval firsthand. But for everyone else, there is still a feeling that something is different this time; that all of those years of education and experience might be turned upside down in an instant. They might not be able to identify it exactly, but they realize that the world they've known for forty, fifty, or sixty years is no longer the same.

There's a good reason for that. We feel it and sense it because it's true. It's happening. There's absolutely no question that the world in 2030 will be a very different place than the one we live in today. But there is a question, a large one, about whether that place will be better or worse.

It's human nature to resist change. We worry about our families, our careers, and our bank accounts. The executives in industries that are already experiencing cataclysmic shifts would much prefer to go back to the way things were ten years ago, when people still bought music, magazines, and books in stores. The future was predictable. Humans like that; it's part of our nature.

But predictability is no longer an option. The intelligence explosion, when it comes in earnest, is going to change everything—we can either be prepared for it and take advantage of it, or we can resist it and get run over.

Unfortunately, there are a good number of people who are going to resist it. Not only those in affected industries, but those who hold power at all levels. They see how technology is cutting out the middlemen, how people are becoming empowered, how bloggers can break national news and YouTube videos can create superstars.

And they don’t like it.

A Battle for the Future

Power bases in business and politics that have been forged over decades, if not centuries, are being threatened with extinction, and they know it. So the owners of that power are trying to hold on. They think they can do that by dragging us backward. They think that, by growing the public's dependency on government, by taking away the entrepreneurial spirit and rewards, and by limiting personal freedoms, they can slow down progress.

But they're wrong. The intelligence explosion is coming so long as science itself continues. Trying to put the genie back in the bottle by dragging us toward serfdom won't stop it and will, in fact, only leave the world with an economy and society that are completely unprepared for the amazing things that it could bring.

Robin Hanson, author of "The Economics of the Singularity" and an associate professor of economics at George Mason University, wrote that after the Singularity, "The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month."

That is unfathomable. But even if the rate were much slower, say a doubling of the world economy in two years, the shockwaves from that kind of growth would still change everything we've come to know and rely on. A machine could offer the ideal farming methods to double or triple crop production, but it can't force a farmer or an industry to implement them. A machine could find the cure for cancer, but it would be meaningless if the pharmaceutical industry or Food and Drug Administration refused to allow it. The machines won't be the problem; humans will be.
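Hanson's comparison becomes vivid if you convert doubling times into annual growth rates. A small sketch (the function is mine; the fifteen-year and one-month doubling times are the figures from his quote):

```python
# Convert an economic doubling time into the implied annual growth rate.

def annual_growth(doubling_years: float) -> float:
    """Annual growth rate implied by the economy doubling every doubling_years years."""
    return 2 ** (1 / doubling_years) - 1

print(f"doubles in 15 years: {annual_growth(15):.1%} per year")
print(f"doubles in 1 month:  {annual_growth(1 / 12):,.0%} per year")
```

A fifteen-year doubling is a familiar growth rate of a few percent a year; a one-month doubling implies the economy multiplying 2^12-fold, about 4,096 times over, every single year.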

And that's why I wanted to write about this topic. We are at the forefront of something great, something that will make the Industrial Revolution look in comparison like a child discovering his hands. But we have to be prepared. We must be open to the changes that will come, because they will come. Only when we accept that will we be in a position to thrive. We can't allow politicians to blame progress for our problems. We can't allow entrenched bureaucrats and power-hungry executives to influence a future that they may have no place in.

Many people are afraid of these changes—of course they are: it's part of being human to fear the unknown—but we can't be so entrenched in the way the world works now that we are unable to handle change out of fear for what those changes might bring.

Change is going to be as much a part of our future as it has been of our past. Yes, it will happen faster and the changes themselves will be far more dramatic, but if we prepare for it, the change will mostly be positive. That preparation is the key: we need to become more well-rounded as individuals so that we're able to constantly adapt to new ways of doing things. In the future, the way you do your job may change four, five, or even fifty times over the course of your life. Those who cannot, or will not, adapt will be left behind.

At the same time, the Singularity will give many more people the opportunity to be successful. Because things will change so rapidly, there is a much greater likelihood that people will find something they excel at. But it could also mean that people's successes are much shorter-lived. The days of someone becoming a legend in any one business (think Clive Davis in music, Steven Spielberg in movies, or the Hearst family in publishing) are likely over. But those who embrace and adapt to the coming changes, and surround themselves with others who have done the same, will flourish.

When major companies, set in their ways, try to convince us that change is bad and that we must stick to the status quo, no matter how much human inquisitiveness and ingenuity try to propel us forward, we must look past them. We must know in our hearts that these changes will come, and that if we welcome them into our world, we'll become more successful, more free, and more full of light than we could have ever possibly imagined.

Ray Kurzweil once wrote, "The Singularity is near." The only question will be whether we are ready for it.

The citations for the chapter include:

  • Luke Muehlhauser and Anna Salamon, "Intelligence Explosion: Evidence and Import"

  • Daniel Dewey, "Learning What to Value"

  • Eliezer Yudkowsky, "Artificial Intelligence as a Positive and a Negative Factor in Global Risk"

  • Luke Muehlhauser and Louie Helm, "The Singularity and Machine Ethics"

  • Luke Muehlhauser, "So You Want to Save the World"

  • Michael Anissimov, "The Benefits of a Successful Singularity"