[Question] What can Canadians do to help end the AI arms race?
The AI economic / arms race is an existential threat to humanity, because it incentivizes rushing to develop more capable AI systems while cutting corners on safety. Stopping it would likely require a treaty that restricts the development of AI to a safe pace. The most important parties to this treaty would be the US and China, since they are at the frontier of AI development and are the most deeply invested in winning the race.
I would like to do what I can to end the AI arms race, but I live in Canada. For residents of third nations like me, I still think it’s important to lobby our representatives to prioritize AI safety and to educate our peers about the risks of AI. But ending the AI arms race mainly comes down to what the US and China decide to do, and it’s not so clear how my actions can influence that. I therefore ask this question to solicit ideas on what those of us in third nations, particularly Canada, can do about the AI arms race. I’ll propose a couple of ideas first to hopefully spark some discussion.
One way to approach this problem is to ask the related question: what could an international coalition do that would slow down or stop the AI arms race, even if the US and China were not signatories? If we had a good answer to this question, then AI safety movements in third nations could advocate for the formation of such a coalition. It would give us a strategy that doesn’t critically hinge on the participation of any one particular nation. Here are two ideas about what this coalition could do:
Idea 1: The coalition could require member states to agree to a preemptive ban on the use of AI models that are more powerful than some threshold. This threshold could be set just slightly beyond the limit of frontier models at the time the ban is passed, so it wouldn’t restrict the use of any existing AI models. However, this ban would discourage investment into larger models, because applications built on larger models wouldn’t have a market in any of the coalition member states.
Idea 2: The coalition could create something like the “GUARD” institution proposed in A Narrow Path. GUARD would “pool resources, expertise, and knowledge in a grand-scale collaborative AI safety research effort, with the primary goal of minimizing catastrophic and extinction AI risk,” and would be governed by an “International AI Safety Commission” to ensure that safety is prioritized. Once this is established, we could appeal to top researchers to do responsible safety research at GUARD instead of irresponsibly contributing to the AI arms race. Rather than competing for economic and military superiority, we would be competing for the moral high ground and prestige, and using that to divert talent from a dangerous AI arms race. This won’t stop the arms race, of course: there will always be mediocre researchers who will take whatever work they can find. But I think the top researchers, who can work wherever they want, would usually prefer to do good over evil if they can.
Please let me know your thoughts on these ideas, or any ideas you might have about what third nations can do to help stop the AI arms race.
Unfortunately, I think most active capability researchers—and especially the top ones—think they are doing good already and wouldn’t want to do something else.
Yes, but it’s not all about the way things are right now. It’s about the way things could be, and how we can get there. I think we can agree that, even though capability researchers are not doing good, they do care about doing something good, or at least something that can be rationalized as “good” and perceived as good by others. This means that, if the culture shifts so that those activities are no longer seen as good, and the rationalizations are seen for what they are, they may well change their activities. Or at least the next generation of researchers, who haven’t yet locked into a particular worldview and career path, may not continue those activities.
Michael Kratsios recently said, “We totally reject all efforts by international bodies to assert centralized control and global governance of AI.” What if the US government doesn’t budge on this commitment? This is a plan B: shift the culture among academics so that frontier capabilities research in the private sector is widely frowned upon and the best people want to avoid the well-earned stigma associated with it. Sublimate the competition for capabilities into a competition for righteousness.