Book review: The Infinity Machine

Book review: The Infinity Machine: Demis Hassabis, DeepMind, and the Quest for Superintelligence, by Sebastian Mallaby.

This book is a very good history of DeepMind.

Ender versus the pure scientist

An important theme of the book is that Hassabis identifies as a pure scientist, valuing knowledge for its own sake. He wants to surpass Newton and Einstein at understanding reality.

He also identifies powerfully with Ender Wiggin, wanting to play a pivotal role in a critical interaction between two groups of intelligences.

I see some important tension between these two goals. Are they sufficiently similar that he can accomplish them both? Or is he dividing his effort between them in a way that sabotages his chances of success? Mallaby doesn’t answer.

Mallaby documents some downsides to Hassabis’s choice to emphasize science over engineering.

One of the things that Sutskever loved about OpenAI was that it revered engineers … AI has academic roots, and academics tend to look down on the dirty work of engineering. … Hassabis and his colleagues disparaged OpenAI’s work as engineering-led: all brute force and no intelligence.

OpenAI would have failed if it had chosen a “no intelligence” approach, but underestimating brute force has been a more widespread mistake in AI than underestimating intelligence. See the Bitter Lesson.

DeepMind underestimated scaling laws circa 2019, and fell behind for a while as a result.

Early Funding

DeepMind got adequate funding at its founding in 2010, but gradually developed uncomfortable funding constraints that led to selling out to Google in 2014.

Peter Thiel was comfortable investing in DeepMind in 2010, partly because it was a clearly contrarian stance. But by late 2012, he abandoned further investment because it was too expensive and mainstream. I understand the too expensive part, but too mainstream in 2012 sounds bizarre.

Hassabis was unenthusiastic about joining Google. But he disliked needing to spend time and effort on fundraising. His willingness to cede significant control seems to reflect a decision to prioritize being a scientist over being Ender.

Mallaby documents some later interactions with Google CEO Sundar Pichai which indicate significant tension between Hassabis and Pichai around 2017, with Pichai sounding a bit myopic about the potential of AI. Google’s founders had a better grasp on the future of AI, but they weren’t active enough in the company to offset Pichai’s nearer-term focus.

Facebook offered more money than Google to buy DeepMind. Hassabis talked with Zuckerberg, and rejected him on the grounds that Zuckerberg couldn’t see that AI was more important than virtual reality or 3D printing.

Elon Musk

Musk plays important roles in this story, laden with disturbing contradictions.

He played a possibly important role in prompting Google to buy DeepMind, by bragging to Larry Page about his own investment in DeepMind.

Musk recognized Hassabis’s competence early on. But he switched to disliking Hassabis in 2014 after trying to buy DeepMind and being rejected in favor of Google.

Why did DeepMind reject Musk’s bid? Mallaby doesn’t say much here. Other sources provide conflicting impressions as to which offer promised more autonomy for DeepMind. Hassabis mainly decided on the basis that Google credibly promised to provide enough compute. Without Thiel’s support, Musk’s ability to raise enough money looked questionable. But in hindsight, the examples of OpenAI and Anthropic indicate that it was possible, but hard, to raise enough money as an independent company.

Musk has often expressed a fear of AI being controlled by a big corporation. That’s strange to read now that Musk has merged his AI company into a trillion dollar conglomerate. Was some of Musk’s concern specific to Google? Musk had a strong negative reaction to Larry Page’s position that it was speciesist to worry about AI replacing humanity.

Hassabis hoped to manage the risks of powerful AI by getting the best AI developers on a single team, sometimes likened to a Manhattan Project.

That didn’t survive contact with the egos that AI attracted. From a chapter aptly titled Out of Eden:

But to believers in the singleton vision, OpenAI’s founding represented the Fall: the moment when the serpent brought evil into the garden … Hassabis, ever practical, was also angry in a simpler way. Musk and Hoffman had been invited to the SpaceX gathering [a DeepMind safety board meeting] in good faith. They had sat through the meeting, listened to DeepMind’s plans, and then used what they had heard to double-cross him.

… in early 2014, when Elon Musk tried to buy DeepMind, allegedly to safeguard it for humanity. A year later, Musk remained bitter that his bid had been spurned; if he couldn’t be the one to build AI, he wanted nobody to do so.

Musk … continued to fulminate against DeepMind, denouncing Hassabis as an evil genius, the evidence being that Hassabis had once worked on a computer game called Evil Genius.

An ominous note:

in 2013, Elon Musk’s wife, Talulah Riley—an actress known for playing a seductive TV robot who takes to massacring humans

OpenAI

But Musk’s fracturing of the AI industry wasn’t the only obstacle to cooperative AI development. Maybe the biggest setback came when OpenAI released ChatGPT. AI companies at the time seemed to have a policy of being pretty cautious about releasing new models. OpenAI changed that policy, releasing ChatGPT rather hastily in response to a false rumor that Anthropic was about to release a similar product of its own. Mallaby concludes:

Once ChatGPT had been embraced by consumers, the incentives for gradualism crumbled.

Five months later, Hassabis told Mallaby:

This is wartime, OpenAI and Microsoft have literally parked the tanks on the lawn.

Mustafa Suleyman

The book significantly raised my opinion of Suleyman. My review of his book was unenthusiastic.

Apparently he sounds a bit more forceful when talking to CEOs, and consistently focused on some medium to large risks associated with AI.

He was the most eager of DeepMind leaders to pressure Google into agreeing on an ethics and safety board.

Alas, he doesn’t have the political skills to be effective at his desired role.

In my review I dismissed his concern that “a jobs recession will crater tax receipts”. I recently examined this more carefully, and concluded that Suleyman’s concerns are reasonable, and that there’s likely to be a few months or years of COVID-level stress before government revenues become ample.

My Complaints

Most of the book seems carefully researched. Mallaby understands AI well enough that I don’t have any criticisms there.

Here are some flaws that annoyed me, but which don’t detract much from the book.

… Homo sapiens acquired the capacity for abstract thought, some seventy thousand years ago

That’s misleading at best. There’s significant evidence of earlier abstract thought. It seems likely that abstraction emerged gradually, over a long period.

Mallaby wrote “Oxford’s Future of Life Institute”, when he meant to refer to the Future of Humanity Institute.

In 2019, GPT-2 had barely been able to count up to five; it was impressive in the same way that a four-year-old might be. In 2020, GPT-3 was like a nine-year-old

I like attempts to explain AI progress by estimating the equivalent human ages, but my experience indicates that AI is advancing at more like two years of age per calendar year. Mallaby’s estimates seem like they’re the result of cherry-picking the most impressive responses. I focus somewhat on their planning abilities, whereas I doubt that Mallaby puts any weight on those abilities. I say AI is just now reaching the nine-year-old level.

Mallaby quotes Hinton as saying “There aren’t any examples of more intelligent things being controlled by less intelligent things”.

While I approve of Mallaby’s concerns about this kind of risk, he ought to have pointed out that Hinton exaggerated. Some counter-examples:

  • Toxoplasma gondii controlling mice

  • Zombie Ant Fungus

  • The “pointy-haired boss” trope

  • Cats training humans to feed and pet them

  • The hunger-signaling part of the human mind overruling the part that wants to lose weight

Concluding Thoughts

The book provides good insights into why AI became a race rather than a Manhattan Project.

Mallaby leans toward the conclusion that a race was inevitable. I’m not convinced.

This seems wrong:

The US-China race dynamic made it almost impossible to stanch the intra-US race dynamic.

(I’m unsure whether Mallaby endorsed that view, or was merely reporting Hassabis’s view.) We know from nuclear non-proliferation treaties that cooperation is possible between nations that are more hostile toward each other than the US and China currently are. This quote comes after Mallaby has devoted plenty of analysis pointing toward Altman and Musk triggering a race for reasons that seem completely unrelated to China. I remain confused as to why people treat China as anything more than a rationalization for pursuing a competition that they want for less noble reasons.

The book convinced me that Hassabis is making a mistake by trying to be both Einstein and Ender.

Hassabis has achieved partial success at Einstein-level science, while, like Einstein, working at another job.

But being Ender is really a full-time job. It currently looks like Hassabis is not focused narrowly enough on being Ender to be the pivotal person as AI remakes the world.

Mallaby hints at Oppenheimer as a role model that might be appropriate for Hassabis. Maybe the world would be better off if Hassabis had aimed more for that role. Mallaby seems to say it’s much too late to try that.

More Quotes

“I think political systems will use it to terrorize people,” Hinton answered.

“Then why are you doing the research?” Bostrom asked.

“I could give you the usual arguments,” Hinton replied. “But the truth is that the prospect of discovery is too sweet.”

According to David Silver:

When Demis talked about the influence of his mother and his horror of manipulating others, he meant it. But it was one thing to abhor the idea of controlling colleagues. Given his Jedi-level charisma, it was quite another to avoid it.

I’m puzzled. I don’t see signs of Jedi-level charisma.

Because of the black-box nature of these networks, the scientists who built them often sounded like surprised parents. Look, my child can say so many more words than just a week ago!

Hassabis was never part of the Singularity crowd. But he shared the assumption that a “singleton” scenario provided the best shot at safe AI … He imagined convening a band of elite scientists in a secluded research center, there to focus single-mindedly on the birthing of safe superintelligence. This mash-up of Ender’s clandestine space station and the Manhattan Project’s secret encampment in New Mexico bubbled up in conversation periodically