The Coming Wave


Book review: The Coming Wave: Technology, Power, and the Twenty-first Century’s Greatest Dilemma, by Mustafa Suleyman.

An author with substantial AI expertise has attempted to discuss AI in terms that the average book reader can understand.

The key message: AI is about to become possibly the most important development in human history. Maybe 2% of readers will change their minds as a result of reading the book.

A large fraction of readers will come in expecting the book to be mostly hype. They won’t look closely enough to see why Suleyman is excited.

Danger

How much danger does Suleyman see?

Mostly he’s vague about what failure looks like.

AI, synthetic biology, and other advanced forms of technology … could present an existential threat to nation-states—risks so profound they might disrupt or even overturn the current geopolitical order.

Much later, he says we could even face “an existential risk to the species.”

However, these isolated quotes don’t reflect the tone of the book, which is calmer than they led me to expect.

Does that tone reflect an assumption that most readers wouldn’t believe a more frightened tone? Or that he’s resigned to the inevitability of the human race taking a big gamble? Most likely it’s some combination of the two.

He claims that advances in AI technology are impossible to stop. Three independent sets of incentives are pushing it forward: the profit motive, the need for nations to protect themselves from other nations, and competition for prestige. Wise governments would be our only hope of stopping the effects of those forces. Suleyman has mostly given up hope that governments are that competent.

The book’s confusing impressions are best summarized by this quote:

And yet, while there is compelling evidence that containment is not possible, temperamentally I remain an optimist.

Would believing him help the average reader?

I suppose we should presume, unless proven otherwise, that people are better off understanding more.

Yet there’s depressingly little that the book’s target audience can do about AI.

Containment

What does Suleyman mean by containment? I can only guess, based on how he advises trying to achieve it.

He says we should avoid recursive self-improvement and autonomy. But he devotes just one sentence to that. Does he think such rules are no more important than a prohibition on using AI for electioneering? Maybe.

What does he think about the difficulty of enforcing those rules? I see plenty about why containment in general is hard, but little about whether some rules are more enforceable than others.

Distinguishing AI Prospects from Hype

Suleyman’s warnings look hard for an average person to distinguish from the usual hype that surrounds new technologies.

I don’t know what he could have done better to convince the average reader that this time is different.

I made a few attempts last year to persuade superforecasters that AI would produce transformative changes within a decade or so. I mostly failed. The unusual nature of my claims about AI advances caused many people to demand a higher burden of proof than could be met with a few hours’ worth of effort.

The situation feels like how it must have felt in February 2020 to those who saw that COVID wasn’t going to be contained. That time, I was one of the people who were tired of reading about possible SARS, Ebola, swine flu, etc. pandemics. What could have convinced me to investigate more carefully how COVID was spreading? The most promising approach that comes to mind is people offering to bet me. But there are limits to how many bet offers I can pay attention to, so even a culture of betting might have been inadequate. With AI, it’s somewhat tricky to figure out what to bet on.

I wish Suleyman had found some way to show a graph of accelerating generality in AIs.

A decade ago, it was hard to see how a program designed to play Go could also look better than a total idiot at chess.

Nowadays, systems are sufficiently general-purpose that a good deal of the development effort goes into suppressing unwanted abilities. I’ll guess that about a third of the human effort devoted to training GPT-4 went to tasks such as persuading it not to generate porn or racist jokes. That feels like a significant change, one that distinguishes AI from most new technologies.

If we could get a good graph of AI progress, that would go a long way toward clarifying disagreements over whether AI will become powerful soon.

If I’d been writing this book, I’d have shown some sort of graph of generality versus time, with an implication that AI will reach human levels in the early 2030s. The numbers that I would use would be somewhat arbitrary and hard to defend in detail. But the overall pattern is hard to deny. That pattern reflects the most important aspects of my reasoning about AI timelines.
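To make that concrete, here’s a minimal sketch of the kind of graph I have in mind. The generality scores below are placeholders I invented for illustration, not measurements from the book or from any benchmark; the point is the shape of the trend, and that a straight-line extrapolation of numbers like these crosses “human level” in the early 2030s.

```python
# Toy sketch of "generality versus time". The scores are invented
# placeholders chosen to illustrate the trend, not real benchmark results.
import numpy as np
import matplotlib.pyplot as plt

# Rough generality scores on an arbitrary 0-10 scale (10 = human level).
years = np.array([2013, 2016, 2019, 2021, 2023])
scores = np.array([1.0, 2.0, 3.0, 4.5, 6.0])  # hypothetical numbers

# Fit a simple linear trend and find where it crosses human level.
slope, intercept = np.polyfit(years, scores, 1)
human_level = 10.0
crossing_year = (human_level - intercept) / slope  # ~2032 with these numbers

future = np.arange(2013, 2036)
plt.plot(years, scores, "o", label="hypothetical generality scores")
plt.plot(future, slope * future + intercept, "--", label="linear extrapolation")
plt.axhline(human_level, color="gray", linewidth=1, label="human level (arbitrary)")
plt.xlabel("year")
plt.ylabel("generality (arbitrary units)")
plt.title(f"Extrapolated crossing: ~{crossing_year:.0f}")
plt.legend()
plt.show()
```

Any such graph inherits the arbitrariness of the scores; the argument rests on the slope being clearly positive, not on the exact crossing year.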

The book makes many forecasts about a variety of technologies. I encourage readers to pay careful attention to which ones have clear dates associated with them, and take those forecasts more seriously than the forecasts with vague timelines.

His most interesting forecast involves what he labels the Modern Turing Test. It tests whether an AI can fulfill this request:

Go make $1 million on Amazon in a few months with just a $100,000 investment. … [caveats about bank accounts] … I think it will be done with a few minor human interventions within the next year, and probably fully autonomously within three to five years.

This seems like his second most important forecast:

Pause for a moment and imagine a world where robots with the dexterity of human beings that can be “programmed” in plain English are available at the price of a microwave. … utterly inevitable over a twenty-year horizon, and possibly much sooner.

Comparing AI to a Prior Wave of Futurism

The middle part of the book is a survey of other possibly imminent technological advances. It’s somewhat reminiscent of Drexler’s 1986 book Engines of Creation.

Suleyman’s forecasts predict somewhat more dramatic changes than Drexler predicted back in 1986.

How does AI today compare to how nanotech looked back then? Talk about AI has spread a bit wider than talk about nanotech ever did, but that might be due to more people understanding software than chemistry.

The difference that strikes me the most is the reaction of venture capital. Fifteen years after Drexler predicted that nanotech would become important in 15 to 30 years, a few VCs started talking to nanotech companies about maybe investing a few million dollars over a few years. The dot-com crash convinced them to be more cautious. Nanotech researchers mostly continued to focus on academic publications. Most nanotech funding came from government grants, and went to chemistry and physics projects that were renamed as nanotech in order to sound more cutting-edge.

Compare that to 2022. VCs threw billions at AI startups, some of them months-old companies whose founders had limited track records. At least one of those startups is too committed to secrecy to be confused with a project that’s focused on academic prestige. Academia and government grants seem only marginally relevant to recent AI progress.

And of course there’s OpenAI: clearly motivated in part by prestige, but more by prestige among trendy startups than by prestige in academia.

Digressions

Parts of the book wander far from Suleyman’s areas of expertise.

One example:

a jobs recession will crater tax receipts, damaging public services and calling into question welfare programs just as they are most needed.

Our experience at the start of the COVID pandemic suggests that many governments are able to handle a sudden jobs recession, at least for a year. I see a good chance that taxes on capital gains from leading AI company stocks will offset the decline in taxes on wages. A massive AI workforce will create plenty of wealth. It’s not obvious why governments would have difficulty grabbing enough of that wealth to support increased welfare payments.

There will be important political problems, but they’ll likely be quite different from what Suleyman worries about.

The Book’s Recommendations

I’ll comment on an arbitrary sample of the book’s recommendations for containing AI.

Suleyman wants an Apollo Program on technical safety. This might be the right way to think about some subsets of AI safety research, where there’s something resembling a consensus as to what we want (e.g. interpretability).

But having one big centrally organized project is risky, because there are still some potentially critical areas where we don’t know whether anyone is asking the right questions. E.g. there’s little agreement over how to tell an AI what goals to follow. The leading strategy, RLHF, appears adequate only in worlds where AI alignment is fairly easy. Having one big project take charge of this could discourage the pursuit of novel approaches that might be safer.

Suleyman recommends limits on copying AI weights, using computer security techniques. I see some resemblance to how telomeres arguably limit cancer. I don’t quite see how that can be implemented without dramatic restrictions on where those weights can be used. In particular, how can we reliably ensure that an AI can’t copy its own weights?
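To illustrate one flavor of what “computer security techniques” might mean here, below is a toy sketch that refuses to load a weights file unless its hash appears in a registry of approved checkpoints. The registry, checkpoint name, and hash are all hypothetical inventions of mine; Suleyman doesn’t describe a specific mechanism.

```python
# Toy sketch: refuse to load model weights unless their hash appears in a
# registry of approved checkpoints. The registry contents are hypothetical.
import hashlib
from pathlib import Path

# Hypothetical registry mapping approved checkpoint names to SHA-256 hashes.
APPROVED_WEIGHTS = {
    "example-model-v1": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def sha256_of_file(path: Path) -> str:
    """Stream the file through SHA-256 so large checkpoints fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_weights(path: Path) -> bytes:
    """Load weights only if their hash matches an approved checkpoint."""
    if sha256_of_file(path) not in APPROVED_WEIGHTS.values():
        raise PermissionError(f"{path} is not an approved checkpoint")
    return path.read_bytes()
```

The sketch also exposes the weakness my question points at: any process trusted to read the weights in order to hash or load them can also copy them, so a check like this constrains honest software, not a determined copier.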

He wants a culture of openness around failures that’s similar to what the world has achieved for plane crashes. That reflects a somewhat optimistic guess about how gradually failures will become more serious.

Suleyman says critics should work on advancing AI capabilities, believing that’s the most likely way someone can influence the course of AI. Alas, that relies on controversial assumptions that he doesn’t adequately explain. If he does have explanations, they would likely be hard to fit into a layman-oriented book.

Conclusion

I rate this book as four stars, mainly for being a serious attempt by a reputable authority to convince laymen that big changes are imminent.

His advice is vaguely good, but likely too ordinary to address the magnitude of our coming challenges.

Nearly all the value lies in the first third or the first half of the book. Feel free to quit partway through.