Glenn Beck discusses the Singularity, cites SI researchers

From the final chapter of his new book Cowards, titled “Adapt or Die: The Coming Intelligence Explosion.”

The year is 1678 and you’ve just arrived in England via a time machine. You take out your new iPhone in front of a group of scientists who have gathered to marvel at your arrival.

“Siri,” you say, addressing the phone’s voice-activated artificial intelligence system, “play me some Beethoven.”

Dunh-Dunh-Dunh-Duuunnnhhh! The famous opening notes of Beethoven’s Fifth Symphony, stored in your music library, play loudly.

“Siri, call my mother.”

Your mother’s face appears on the screen, a Hawaiian beach behind her. “Hi, Mom!” you say. “How many fingers am I holding up?”

“Three,” she correctly answers. “Why haven’t you called more—”

“Thanks, Mom! Gotta run!” you interrupt, hanging up.

“Now,” you say. “Watch this.”

Your new friends look at the iPhone expectantly.

“Siri, I need to hide a body.”

Without hesitation, Siri asks: “What kind of place are you looking for? Mines, reservoirs, metal foundries, dumps, or swamps?” (I’m not kidding. If you have an iPhone 4S, try it.)

You respond “Swamps,” and Siri pulls up a satellite map showing you nearby swamps.

The scientists are shocked into silence. What is this thing that plays music, instantly teleports video of someone across the globe, helps you get away with murder, and is small enough to fit into a pocket?

At best, your seventeenth-century friends would worship you as a messenger of God. At worst, you’d be burned at the stake for witchcraft. After all, as science fiction author Arthur C. Clarke once said, “Any sufficiently advanced technology is indistinguishable from magic.”

Now, imagine telling this group that capitalism and representative democracy will take the world by storm, lifting hundreds of millions of people out of poverty. Imagine telling them their descendants will eradicate smallpox and regularly live seventy-five or more years. Imagine telling them that men will walk on the moon, that planes, flying hundreds of miles an hour, will transport people around the world, or that cities will be filled with buildings reaching thousands of feet into the air.

They’d probably escort you to the madhouse.

Unless, that is, one of the people in that group had been a man named Ray Kurzweil.

Kurzweil is an inventor and futurist who has done a better job than most at predicting the future. Dozens of the predictions from his 1990 book The Age of Intelligent Machines came true during the 1990s and 2000s. His follow-up book, The Age of Spiritual Machines, published in 1999, fared even better. Of the 147 predictions that Kurzweil made for 2009, 78 percent turned out to be entirely correct, and another 8 percent were roughly correct. For example, even though every portable computer had a keyboard in 1999, Kurzweil predicted that most portable computers would lack a keyboard by 2009. It turns out he was right: by 2009, most portable computers were MP3 players, smartphones, tablets, portable game machines, and other devices that lacked keyboards.

Kurzweil is most famous for his “law of accelerating returns,” the idea that technological progress is generally “exponential” (like a hockey stick, curving up sharply) rather than “linear” (like a straight line, rising slowly). In nongeek-speak, that means that our knowledge is like the compound interest you get on your bank account: it increases exponentially as time goes on because it keeps building on itself. We won’t experience one hundred years of progress in the twenty-first century, but rather twenty thousand years of progress (measured at today’s rate).
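
To make the compound-interest analogy concrete, here is a minimal sketch in Python. The starting value of 100 and the 5 percent rate are arbitrary numbers chosen only for illustration; they are not Kurzweil’s figures:

```python
# Illustrative numbers only: compare linear growth (a fixed increment
# per year) with exponential growth (a fixed percentage per year)
# starting from the same baseline.

def linear(start, increment, years):
    return start + increment * years

def exponential(start, rate, years):
    return start * (1 + rate) ** years

for years in (10, 25, 50, 100):
    lin = linear(100, 5, years)          # +5 units every year
    exp = exponential(100, 0.05, years)  # +5 percent every year
    print(f"{years:3d} years: linear = {lin:7.0f}   exponential = {exp:10.0f}")
```

The two curves start out nearly together, which is why exponential growth is so easy to mistake for a straight line early on; by year one hundred, the exponential curve is more than twenty times higher than the linear one.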

Many experts have criticized Kurzweil’s forecasting methods, but a careful and extensive review of technological trends by researchers at the Santa Fe Institute came to the same basic conclusion: technological progress generally tends to be exponential (or even faster than exponential), not linear.

So, what does this mean? In his 2005 book The Singularity Is Near, Kurzweil shares his predictions for the next few decades:

  • In our current decade, Kurzweil expects real-time translation tools and automatic house-cleaning robots to become common.

  • In the 2020s he expects to see the invention of tiny robots that can be injected into our bodies to intelligently find and repair damage and cure infections.

  • By the 2030s he expects “mind uploading” to be possible, meaning that your memories and personality and consciousness could be copied to a machine. You could then make backup copies of yourself, and achieve a kind of technological immortality.

[sidebar]

Age of the Machines?

“We became the dominant species on this planet by being the most intelligent species around. This century we are going to cede that crown to machines. After we do that, it will be them steering history rather than us.”

—Jaan Tallinn, co-creator of Skype and Kazaa

[/sidebar]

If any of that sounds absurd, remember again how absurd the eradication of smallpox or the iPhone 4S would have seemed to those seventeenth-century scientists. That’s because the human brain is conditioned to believe that the past is a great predictor of the future. While that might work fine in some areas, technology is not one of them. Just because it took decades to put two hundred transistors onto a computer chip doesn’t mean that it will take decades to get to four hundred. In fact, Moore’s Law, which states (roughly) that computing power doubles every two years, shows why technological progress must be thought of as a “hockey stick,” not a “straight line.” Moore’s Law has held for more than half a century already (we can currently fit 2.6 billion transistors onto a single chip), and there’s little reason to expect that it won’t continue to hold.
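
As a quick sanity check on those numbers, here is a back-of-the-envelope calculation in Python. The 1971 baseline of roughly 2,300 transistors (the Intel 4004) is an assumption added for illustration; the 2.6 billion figure comes from the paragraph above:

```python
import math

# If chips went from roughly 2,300 transistors (Intel 4004, 1971)
# to roughly 2.6 billion (circa 2012), how long did each doubling
# take on average?
start_transistors = 2_300          # assumed 1971 baseline
end_transistors = 2_600_000_000    # figure cited in the text
years = 2012 - 1971

doublings = math.log2(end_transistors / start_transistors)
print(f"doublings: {doublings:.1f}")                   # about 20
print(f"years per doubling: {years / doublings:.1f}")  # about 2
```

Twenty doublings in about forty years works out to one doubling roughly every two years, which is just what Moore’s Law predicts.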

But the aspect of his book that has the most far-reaching ramifications for us is Kurzweil’s prediction that we will achieve a “technological singularity” in 2045. He defines this term rather vaguely as “a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed.”

Part of what Kurzweil is talking about is based on an older, more precise notion of “technological singularity” called an intelligence explosion. An intelligence explosion is what happens when we create artificial intelligence (AI) that is better than we are at the task of designing artificial intelligences. If the AI we create can improve its own intelligence without waiting for humans to make the next innovation, this will make it even more capable of improving its intelligence, which will . . . well, you get the point. The AI can, with enough improvements, make itself smarter than all of us mere humans put together.
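
To see why this feedback loop “explodes” rather than merely grows, consider a toy model in Python. Every number in it is invented purely for illustration, and nothing here is a claim about how real AI systems behave: each redesign multiplies the AI’s capability by a fixed factor, and a more capable AI finishes its next redesign proportionally faster:

```python
# Toy model of recursive self-improvement. Invented numbers throughout:
# each cycle multiplies capability by 1.5, and a system that is N times
# more capable finishes its next redesign N times faster.
capability = 1.0            # 1.0 = human-level at designing AIs
gain_per_cycle = 1.5        # each redesign is 50% better than the last
first_cycle_months = 12.0   # assume version 1 took a year to build

elapsed = 0.0
for cycle in range(1, 21):
    elapsed += first_cycle_months / capability  # smarter means faster
    capability *= gain_per_cycle
    if cycle % 5 == 0:
        print(f"cycle {cycle:2d}: {capability:8.1f}x capability "
              f"after {elapsed:5.1f} months")
```

In this toy setup, capability grows without bound while the total elapsed time converges toward thirty-six months: each cycle finishes faster than the last, so the “explosion” arrives in about three years no matter how many cycles it takes.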

The really exciting part (or the scary part, if your vision of the future is more like the movie The Terminator) is that, once the intelligence explosion happens, we’ll get an AI that is as superior to us at science, politics, invention, and social skills as your computer’s calculator is to you at arithmetic. The problems that have occupied mankind for decades— curing diseases, finding better energy sources, etc.— could, in many cases, be solved in a matter of weeks or months.

Again, this might sound far-fetched, but Ray Kurzweil isn’t the only one who thinks an intelligence explosion could occur sometime this century. Justin Rattner, the chief technology officer at Intel, predicts some kind of Singularity by 2048. Michael Nielsen, co-author of the leading textbook on quantum computation, thinks there’s a decent chance of an intelligence explosion by 2100. Richard Sutton, one of the biggest names in AI, predicts an intelligence explosion near the middle of the century. Leading philosopher David Chalmers is 50 percent confident an intelligence explosion will occur by 2100. Participants at a 2009 conference on AI tended to be 50 percent confident that an intelligence explosion would occur by 2045.

If we can properly prepare for the intelligence explosion and ensure that it goes well for humanity, it could be the best thing that has ever happened on this fragile planet. Consider the difference between humans and chimpanzees, which share 95 percent of their genetic code. A relatively small difference in intelligence gave humans the ability to invent farming, writing, science, democracy, capitalism, birth control, vaccines, space travel, and iPhones— all while chimpanzees kept flinging poo at each other.

[sidebar]

Intelligent Design?

The thought that machines could one day have superhuman abilities should make us nervous. Once the machines are smarter and more capable than we are, we won’t be able to negotiate with them any more than chimpanzees can negotiate with us. What if the machines don’t want the same things we do?

The truth, unfortunately, is that every kind of AI we know how to build today definitely would not want the same things we do. To build an AI that does, we would need a more flexible “decision theory” for AI design and new techniques for making sense of human preferences. I know that sounds kind of nerdy, but AIs are made of math, and the math you choose determines the results you get when you build an AI.

These are the kinds of research problems being tackled by the Singularity Institute in America and the Future of Humanity Institute in Great Britain. Unfortunately, our silly species still spends more money each year on lipstick research than we do on figuring out how to make sure that the most important event of this century (maybe of all human history)— the intelligence explosion— actually goes well for us.

[/sidebar]

Likewise, self-improving machines could perform scientific experiments and build new technologies much faster and more intelligently than humans can. Curing cancer, finding clean energy, and extending life expectancies would be child’s play for them. Imagine living out your own personal fantasy in a different virtual world every day. Imagine exploring the galaxy at near light speed, with a few backup copies of your mind safe at home on earth in case you run into a supernova. Imagine a world where resources are harvested so efficiently that everyone’s basic needs are taken care of, and political and economic incentives are so intelligently fine-tuned that “world peace” becomes, for the first time ever, more than a Super Bowl halftime show slogan.

With self-improving AI we may be able to eradicate suffering and death just as we once eradicated smallpox. It is not the limits of nature that prevent us from doing this, but only the limits of our current understanding. It may sound like a paradox, but it’s our brains that prevent us from fully understanding our brains.

Turf Wars

At this point you might be asking yourself: “Why is this topic in this book? What does any of this have to do with the economy or national security or politics?”

In fact, it has everything to do with all of those issues, plus a whole lot more. The intelligence explosion will bring about change on a scale and scope not seen in the history of the world. If we don’t prepare for it, things could get very bad, very fast. But if we do prepare for it, the intelligence explosion could be the best thing that has happened since . . . literally ever.

But before we get to the kind of life-altering progress that would come after the Singularity, we will first have to deal with a lot of smaller changes, many of which will throw entire industries and ways of life into turmoil. Take the music business, for example. It was not long ago that stores like Tower Records and Sam Goody were doing billions of dollars a year in compact disc sales; now people buy music from home via the Internet. Publishing is currently facing a similar upheaval. Newspapers and magazines have struggled to keep subscribers, booksellers like Borders have been forced into bankruptcy, and customers are forcing publishers to switch to ebooks faster than the publishers might like.

All of this is to say that some people are already witnessing the early stages of upheaval firsthand. But for everyone else, there is still a feeling that something is different this time; that all of those years of education and experience might be turned upside down in an instant. They might not be able to identify it exactly, but they realize that the world they’ve known for forty, fifty, or sixty years is no longer the same.

There’s a good reason for that. We feel it and sense it because it’s true. It’s happening. There’s absolutely no question that the world in 2030 will be a very different place than the one we live in today. But there is a question, a large one, about whether that place will be better or worse.

It’s human nature to resist change. We worry about our families, our careers, and our bank accounts. The executives in industries that are already experiencing cataclysmic shifts would much prefer to go back to the way things were ten years ago, when people still bought music, magazines, and books in stores. The future was predictable. Humans like that; it’s part of our nature.

But predictability is no longer an option. The intelligence explosion, when it comes in earnest, is going to change everything— we can either be prepared for it and take advantage of it, or we can resist it and get run over.

Unfortunately, there are a good number of people who are going to resist it. Not only those in affected industries, but those who hold power at all levels. They see how technology is cutting out the middlemen, how people are becoming empowered, how bloggers can break national news and YouTube videos can create superstars.

And they don’t like it.

A Battle for the Future

Power bases in business and politics that have been forged over decades, if not centuries, are being threatened with extinction, and they know it. So the owners of that power are trying to hold on. They think they can do that by dragging us backward. They think that, by growing the public’s dependency on government, by taking away the entrepreneurial spirit and rewards, and by limiting personal freedoms, they can slow down progress.

But they’re wrong. The intelligence explosion is coming so long as science itself continues. Trying to put the genie back in the bottle by dragging us toward serfdom won’t stop it and will, in fact, only leave the world with an economy and society that are completely unprepared for the amazing things that it could bring.

Robin Hanson, author of “The Economics of the Singularity” and an associate professor of economics at George Mason University, wrote that after the Singularity, “The world economy, which now doubles in 15 years or so, would soon double in somewhere from a week to a month.”

That is unfathomable. But even if the rate were much slower, say a doubling of the world economy in two years, the shock waves from that kind of growth would still change everything we’ve come to know and rely on. A machine could offer the ideal farming methods to double or triple crop production, but it can’t force a farmer or an industry to implement them. A machine could find the cure for cancer, but it would be meaningless if the pharmaceutical industry or Food and Drug Administration refused to allow it. The machines won’t be the problem; humans will be.
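
For a sense of scale, the arithmetic of repeated doubling is easy to check in Python (the doubling periods below come from the surrounding text):

```python
# How much the world economy would grow, relative to today, under
# different doubling periods.
def growth_factor(doubling_period_years, horizon_years):
    return 2 ** (horizon_years / doubling_period_years)

print(f"doubling every 15 years, after 15 years: {growth_factor(15, 15):,.0f}x")
print(f"doubling every 2 years,  after 10 years: {growth_factor(2, 10):,.0f}x")
print(f"doubling every month,    after 1 year:   {growth_factor(1/12, 1):,.0f}x")
```

Even the deliberately “slow” two-year case multiplies the world economy thirty-two-fold within a decade; Hanson’s monthly case multiplies it roughly four-thousand-fold in a single year.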

And that’s why I wanted to write about this topic. We are at the forefront of something great, something that will make the Industrial Revolution look in comparison like a child discovering his hands. But we have to be prepared. We must be open to the changes that will come, because they will come. Only when we accept that will we be in a position to thrive. We can’t allow politicians to blame progress for our problems. We can’t allow entrenched bureaucrats and power-hungry executives to influence a future that they may have no place in.

Many people are afraid of these changes— of course they are: it’s part of being human to fear the unknown— but we can’t be so entrenched in the way the world works now that we are unable to handle change out of fear for what those changes might bring.

Change is going to be as much a part of our future as it has been of our past. Yes, it will happen faster and the changes themselves will be far more dramatic, but if we prepare for it, the change will mostly be positive. But that preparation is the key: we need to become more well-rounded as individuals so that we’re able to constantly adapt to new ways of doing things. In the future, the way you do your job may change four, five, or fifty times over the course of your life. Those who cannot, or will not, adapt will be left behind.

At the same time, the Singularity will give many more people the opportunity to be successful. Because things will change so rapidly there is a much greater likelihood that people will find something they excel at. But it could also mean that people’s successes are much shorter-lived. The days of someone becoming a legend in any one business (think Clive Davis in music, Steven Spielberg in movies, or the Hearst family in publishing) are likely over. But those who embrace and adapt to the coming changes, and surround themselves with others who have done the same, will flourish.

When major companies, set in their ways, try to convince us that change is bad and that we must stick to the status quo, no matter how much human inquisitiveness and ingenuity try to propel us forward, we must look past them. We must know in our hearts that these changes will come, and that if we welcome them into our world, we’ll become more successful, more free, and more full of light than we could have ever possibly imagined.

Ray Kurzweil once wrote, “The Singularity is near.” The only question will be whether we are ready for it.

The citations for the chapter include:

  • Luke Muehlhauser and Anna Salamon, “Intelligence Explosion: Evidence and Import”

  • Daniel Dewey, “Learning What to Value”

  • Eliezer Yudkowsky, “Artificial Intelligence as a Positive and Negative Factor in Global Risk”

  • Luke Muehlhauser and Louie Helm, “The Singularity and Machine Ethics”

  • Luke Muehlhauser, “So You Want to Save the World”

  • Michael Anissimov, “The Benefits of a Successful Singularity”