Dario Amodei and Demis Hassabis statements on international coordination (source):
Interviewer: The personal decisions you make are going to shape this technology. Do you ever worry about ending up like Robert Oppenheimer?
Demis: Look, I worry about those kinds of scenarios all the time. That’s why I don’t sleep very much. There’s a huge amount of responsibility on the people, probably too much, on the people leading this technology. That’s why us and others are advocating for, we’d probably need institutions to be built to help govern some of this. I talked about CERN, I think we need an equivalent of an IAEA atomic agency to monitor sensible projects and those that are more risk-taking. I think we need to think about, society needs to think about, what kind of governing bodies are needed. Ideally it would be something like the UN, but given the geopolitical complexities, that doesn’t seem very possible. I worry about that all the time and we just try to do, at least on our side, everything we can in the vicinity and influence that we have.
Dario: My thoughts exactly echo Demis. My feeling is that almost every decision that I make feels like it’s kind of balanced on the edge of a knife. If we don’t build fast enough, then the authoritarian countries could win. If we build too fast, then the kinds of risks that Demis is talking about and that we’ve written about a lot could prevail. Either way, I’ll feel that it was my fault that we didn’t make exactly the right decision. I also agree with Demis that this idea of governance structures outside ourselves. I think these kinds of decisions are too big for any one person. We’re still struggling with this, as you alluded to, not everyone in the world has the same perspective, and some countries in a way are adversarial on this technology, but even within all those constraints we somehow have to find a way to build a more robust governance structure that doesn’t put this in the hands of just a few people.
Interviewer: [...] Is it actually possible [...]?
Demis: [...] Some sort of international dialogue is going to be needed. These fears are sometimes written off by others as luddite thinking or deceleration, but I’ve never heard a situation in the past where the people leading the field are also expressing caution. We’re dealing with something unbelievably transformative, incredibly powerful, that we’ve not seen before. It’s not just another technology. You can hear from a lot of the speeches at this summit, still people are regarding this as a very important technology, but still another technology. It’s different in category. I don’t think everyone’s fully understood that.
Interviewer: [...] Do you think we can avoid there having to be some kind of a disaster? [...] What should give us all hope that we will actually get together and create this until something happens that demands it?
Dario: If everyone wakes up one day and they learn that some terrible disaster has happened that’s killed a bunch of people or caused an enormous security incident, that would be one way to do it. Obviously, that’s not what we want to happen. [...] every time we have a new model, we test it, we show it to the national security people [...]
It’s nice to hear that there are at least a few sane people leading these scaling labs. It’s not clear to me that their intentions are going to translate into much because there’s only so much you can do to wisely guide development of this technology within an arms race.
But we could have gotten a lot less lucky with some of the people in charge of this technology.
A more cynical perspective is that much of this arms race, especially the international one against China (quote from above: “If we don’t build fast enough, then the authoritarian countries could win.”), is entirely manufactured by the US AI labs.
Actions speak louder than words, and their actions are far less sane than these words.
For example, if Demis regularly lies awake at night worrying about how the thing he’s building could kill everyone, why is he still putting so much more effort into building it than into making it safe?
Probably because he thinks there’s a lower chance of it killing everyone if he makes it. And that if it doesn’t kill everyone then he’ll do a better job managing it than the other lab heads.
This is the belief of basically everyone running a major AGI lab. Obviously all but one of them must be mistaken, but it’s natural that they would all share the same delusion.
I agree with this description and I don’t think this is sane behavior.
Only the second belief is necessarily mistaken for everyone but one person; the first is not necessarily wrong, because the second is about relative performance (managing it better than the other lab heads) while the first is about absolute performance (lowering the chance of catastrophe by being the one to build it).
If we don’t build fast enough, then the authoritarian countries could win.
Ideally it would be something like the UN, but given the geopolitical complexities, that doesn’t seem very possible.
This sounds like a rejection of international coordination.
But the United States and the USSR did coordinate on nuclear weapons issues despite intense geopolitical tensions, for example. You can engage with countries you don’t like without racing to destroy the world faster than they can!
I found those quotes useful, thanks!
If you build AI for the US, you’re advancing the capabilities of an authoritarian country at this point.
I think people who are worried about authoritarian regimes getting access to AGI should seriously reconsider whether advancing US leadership in AI is the right thing to do. After the new Executive Order, Trump appears to claim sole authority to interpret the law for the executive branch, and there are indications that the current administration won’t follow court rulings. I think it’s quite likely that the US won’t be a democracy for much longer, in which case the argument for advancing AI in democracies no longer applies to the US.
Have you noticed that AI companies have been opening offices in Switzerland recently? I’m excited about it.
Yes I’ve heard about it (I’m based in Switzerland myself!)
I don’t think it changes the situation that much, though, since OpenAI, Anthropic, and Google are still mostly American-owned companies.