DeepMind: The Podcast—Excerpts on AGI

DeepMind: The Podcast—Season 2 was released over the last ~1-2 months. The two episodes most relevant to AGI are "The road to AGI" (S2, Ep5) and "Promise of AI with Demis Hassabis" (S2, Ep9).

I found a few quotes noteworthy and thought I’d share them here for anyone who didn’t want to listen to the full episodes:

The road to AGI (S2, Ep5)

(Published February 15, 2022)

Shane Legg’s AI Timeline

Shane Legg (4:03):

If you go back 10-12 years ago the whole notion of AGI was lunatic fringe. People [in the field] would literally just roll their eyes and just walk away. [...] [I had that happen] multiple times. I have met quite a few of them since. There have even been cases where some of these people have applied for jobs at DeepMind years later. But yeah, it was a field where you know there were little bits of progress happening here and there, but powerful AGI and rapid progress seemed like it was very, very far away. [...] Every year [the number of people who roll their eyes at the notion of AGI] becomes less.

Hannah Fry (5:02):

For over 20 years, Shane has been quietly making predictions of when he expects to see AGI.

Shane Legg (5:09):

I always felt that somewhere around 2030-ish it was about a 50-50 chance. I still feel that seems reasonable. If you look at the amazing progress in the last 10 years and you imagine in the next 10 years we have something comparable, maybe there’s some chance that we will have an AGI in a decade. And if not in a decade, well I don’t know, say three decades or so.

Hannah Fry (5:33):

And what do you think [AGI] will look like? [Shane answers at length.]

David Silver on it being okay to have AGIs with different goals (??)

Hannah Fry (16:45):

Last year David co-authored a provocatively titled paper called Reward is Enough. He believes reinforcement learning alone could lead all the way to artificial general intelligence.

[...] (21:37)

But not everyone at DeepMind is convinced that reinforcement learning on its own will be enough for AGI. Here’s Raia Hadsell, Director of Robotics.

Raia Hadsell (21:44):

The question I usually have is where do we get that reward from. It’s hard to design rewards and it’s hard to imagine a single reward that’s so all-consuming that it would drive learning everything else.

Hannah Fry (21:59):

I put this question about the difficulty of designing an all-powerful reward to David Silver.

David Silver (22:05):

I actually think this is just slightly off the mark–this question–in the sense that maybe we can put almost any reward into the system and if the environment’s complex enough amazing things will happen just in maximizing that reward. Maybe we don’t have to solve this “What’s the right thing for intelligence to really emerge at the end of it?” kind of question and instead embrace the fact that there are many forms of intelligence, each of which is optimizing for its own target. And it’s okay if we have AIs in the future some of which are trying to control satellites and some of which are trying to sail boats and some of which are trying to win games of chess and they may all come up with their own abilities in order to allow that intelligence to achieve its end as effectively as possible.

[...] (26:14)

But of course this is a hypothesis. I cannot offer any guarantee that reinforcement learning algorithms do exist which are powerful enough to just get all the way there. And yet the fact that if we can do it it would provide a path all the way to AGI should be enough for us to try really really hard.

Promise of AI with Demis Hassabis (S2, Ep9)

(Published March 15, 2022)

Demis Hassabis’ AI Timeline

Demis Hassabis (6:23):

From what we’ve seen so far [the development of AGI] will probably be more incremental and then a threshold will be crossed. But I suspect it will start feeling interesting and strange in this middle zone as we start approaching that. We’re not there yet. I don’t think [any] of the systems that we interact with or have built have that feeling of sentience or awareness, any of those things. They’re just kind of programs that execute, albeit they learn. But I could imagine that one day that could happen, you know, there’s a few things I look out for, like perhaps coming up with a truly original idea, creating something new, a new theory in science that ends up holding, maybe coming up with its own problem that it wants to solve. These kinds of things would be the sort of activities that I’d be looking for on the way to maybe that big day.

Hannah Fry (7:07):

If you’re a betting man, then when do you think that will be?

Demis Hassabis (7:11):

So I think that the progress so far has been pretty phenomenal. I think that [AGI] it’s coming relatively soon in the next you know–I wouldn’t be super surprised–in the next decade or two.

AI needs a value system, sociologists and psychologists needed to help define happiness

Hannah Fry (13:02):

Okay how about a moral compass then? Can you impart a moral compass into AI, and should you?

Demis Hassabis (13:09):

I mean I’m not sure I would call it a moral compass, but definitely it’s going to need a value system, because whatever goal you give it you’re effectively incentivizing that AI system to do something. And so as that becomes more and more general you can sort of think about that as almost a value system. What do you want it to do in its set of actions, what do you want to sort of disallow, how should it think about side effects versus its main goal, what’s its top-level goal, if it’s to keep humans happy, which set of humans, what does happiness mean? We [will] definitely need help from philosophers and sociologists [and psychologists] and others about defining what a lot of these terms mean. And of course a lot of them, our collective goals, are very tricky for humans to figure out.

Best outcome of AGI

Hannah Fry (13:58):

What do you see as the best possible outcome of having AGI?

Demis Hassabis (14:03):

The outcome I’ve always dreamed of or imagined is that AGI has helped us solve a lot of the big challenges facing society today, be that health, cures for diseases like Alzheimer’s. I would also imagine AGI helping with climate, creating a new energy source that is renewable, and then what would happen after those kinds of first-stage things is you kind of have this, sometimes people describe it as, radical abundance.

Biggest worries

Hannah Fry (16:01):

I think you probably know what I’m going to ask you next because if that is the fully optimistic utopian view of the future it can’t all be positive when you’re lying awake at night. What are the things that you worry about?

Demis Hassabis (16:13):

Well to be honest with you I do think that is a very plausible end state–the optimistic one I painted you. And of course that’s one reason I work on AI is because I hoped it would be like that. On the other hand, one of the biggest worries I have is what humans are going to do with AI technologies on the way to AGI. Like most technologies they could be used for good or bad and I think that’s down to us as a society and governments to decide which direction they’re going to go in.

Society not yet ready for AGI

Hannah Fry (16:42):

Do you think society is ready for AGI?

Demis Hassabis (16:45):

I don’t think so, yet. I think that’s part of what this podcast series is about as well: to give the general public more of an understanding of what AGI is, what AI is, and what’s coming down the road, so that we can start grappling, as a society and not just the technologists, with what we want to be doing with these systems.

‘Avengers assembled’ for AI Safety: Pause AI development to prove things mathematically

Hannah Fry (17:07):

You said you’ve got this sort of 20-year prediction and then simultaneously where society is in terms of understanding and grappling with these ideas. Do you think that DeepMind has a responsibility to hit pause at any point?

Demis Hassabis (17:24):

Potentially. I always imagined that as we got closer to the sort of gray zone that you were talking about earlier, the best thing to do might be to pause the pushing of the performance of these systems so that you can analyze in minute detail exactly, and maybe even prove things mathematically about, the system, so that you know the limits and otherwise of the systems that you’re building. At that point I think all the world’s greatest minds should probably be thinking about this problem. So that’s what I would be advocating to, you know, the Terence Taos of this world, the best mathematicians. Actually I’ve even talked to him about this: I know you’re working on the Riemann hypothesis or something, which is the best thing in mathematics, but actually this is more pressing. I have this sort of idea of almost an ‘Avengers assembled’ of the scientific world, because that’s a bit of my dream.