I think it’s worth distinguishing between “this AI policy could be slightly inconvenient for America’s overall geopolitical strategy” and “this AI policy is so bad for America’s AI arms race that we’re going to lose a shooting war with China.”
The former is a political problem that advocates need to find a strategy to cope with, but it’s not a reason not to do advocacy—we should be willing to trade away a little bit of American influence in order to avoid a large risk of civilizational collapse from misaligned AI.
If you’re earnestly worried about maximizing American influence, there are much better strategies for making that happen than trying to make sure that we have zero AI regulations. You could repeal the Jones Act, you could have a stable tariff regime, you could fix the visa system, you could fund either the CHIPS Act or a replacement for the CHIPS Act, you could give BIS more funding to go after chip smugglers, and so on.
I think the “concern” about the harmful geopolitical effects of moderate AI regulation is mostly opportunistic political theater by companies who would prefer to remain unregulated—there’s a notable absence of serious international relations scholars or national security experts who are coming out in favor of zero AI safety regulation as a geopolitical tool. At most, some experts might be pushing for easier land use and environmental approvals, which are not in conflict with the regulations that organizations like CAIP are pushing for.