LIKE ALTMAN AND DARIO AMODEI, Hassabis refused to join Bengio in signing the pause letter. Indeed, he objected to it fiercely.
“I didn’t sign because a six-month moratorium doesn’t help,” Hassabis told me.
“Who would have stopped development? Just people who signed? Well, that’s no use because you need the whole world to pause, including China. Who would have monitored it?
“I mean, a pause could actually have made things worse.
“Imagine we had a ten-year moratorium, OK? That would slow down the advance of AI, but everything else would carry on as normal. So, you develop better and better chips, data centers, all that. Then we exit the moratorium and the proverbial programming prodigy in his parents’ garage now has a home computer with the power of a data center!
“We’re supposed to be advancing safety. How is that going to do it? The race condition would be insane at that point!
“I mean, it’s insane right now, but maybe there’s some hope because there are only a few leading actors, and we all know each other.
“After a moratorium, you’d be beholden to random actors.”
Hassabis had a point. A pause by itself would not achieve much.[17] Indeed, in a roundabout endorsement of Hassabis’s argument, the extreme doomster Eliezer Yudkowsky also refused to sign the letter. The way Yudkowsky saw things, the only way to save humanity was for governments to ban frontier development outright, by closing down computer servers. If some countries refused to join the ban, others should be “willing to destroy a rogue datacenter by airstrike,” he asserted.[18] With a p(doom) approaching 100 percent, Yudkowsky thought any measures could be justified. It would be worth risking nuclear war to avert the even greater calamity of rogue superintelligence, he insisted. The costs of an infinity machine could be infinite.
Two months after the pause controversy, at the end of May 2023, the safety debate inched forward. Bengio, Hinton, and Hassabis, together with the leaders of the other major labs, signed a one-sentence statement: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.” Some 350 notables added their names to the letter. Only Meta and the open-weight partisans were absent from the list of signatories.[19]
“I thought long and hard about signing that one,” Hassabis told me. “I would’ve liked an extra sentence acknowledging the upsides—‘We believe the potential of AI is going to be amazing,’ or whatever.
“But I signed because it was important for credible people to oppose the idea that there’s no risk at all.
“The point was to say that there really is a risk of catastrophe. We have no idea what the percentage chance is. We have no idea of the timescale. But it’s nonzero. And it’s going to be really hard to sort out, and it could be really serious if it does happen.
“We wouldn’t have needed to do this if there hadn’t been people like Yann LeCun saying, ‘Oh, there’s nothing to see here.’ Which I think is pretty crazy given the uncertainties.
“He says, ‘I’m sure there’s a safe way to build AI.’ And I agree. It might turn out that as we develop these systems further, it’s way easier to keep control of them than we expected.
“Then he says, ‘Therefore, we will build it in that safe way.’ And that’s where I don’t understand his argument.
“First, we don’t yet know what that safe way is.
“Second, what’s to stop half the world building it the wrong way, even if Yann was somehow to build it correctly?
“It’s like with the open-source debate. What’s to stop bad actors getting hold of the model and then repurposing it for bad ends? What’s the answer to that? There isn’t one.
“And it’s not just Yann. There are all these other people in the Valley.
“I mean, not long ago they were talking about crypto. People who go on about crypto one year and pivot to AI the next obviously are not deep into what’s really happening.
“We’re in a situation with a very high degree of uncertainty, with very high stakes. The honest position is that we don’t know how dangerous this stuff is.
“I suspect the risk is significant, but I think it’s going to go OK as long as we have the time to do it properly. So I call myself a cautious optimist.
“And I make that judgment because I’ve lived with AI for decades now. I’ve thought about it; I’ve felt it.
“But some people have no idea. They just see it as another crypto moneymaking scheme with a bit extra.
“I feel like we should be at a moment of reverence and respect for this momentous technology that we’re ushering into the world, and I sometimes feel it’s sullied. It’s like a gold rush. It’s kind of vulgar.
“And so, going back to the letter, I think it did what we wanted. We made it clear that AI safety should be in scope to debate. After that letter, if someone said, ‘Oh, Yann thinks we don’t need a safety debate,’ the retort would be, ‘Well, look, Hinton and Bengio and me and Dario and all these other serious people think it’s worth talking about.’
“And we need that retort if we are going to have a conversation.
“A conversation with everyone, including with governments.”
From Chapter 18: