This marginal analysis is correct on its own terms, but I think it's irrelevant to Jason's point.
To escape American federal or Californian regulation, a company would have to stop doing business in the relevant jurisdiction, not just shift marginal investment in new data centers. My understanding of Jason's point is that the AI companies are already willing to pay large costs to stay where they are. Sure, there might be a tipping point where companies start leaving the Bay, which is a distinct risk of strong Californian regulation. But it seems implausible that any of the relevant companies would stop selling services to Americans.
But even if the companies stay here, the importance of the American companies may decrease relative to international competitors. Also, I think there are things farther up the supply chain that can move overseas. If American cloud companies face big barriers to training their own frontier models, maybe they'll serve up DeepSeek models instead.
I don't think this should be a huge concern in the near term, as long as the regulations are well written. But fundamentally, it feeds back into the race dynamic.
I think it’s worth distinguishing between “this AI policy could be slightly inconvenient for America’s overall geopolitical strategy” and “this AI policy is so bad for America’s AI arms race that we’re going to lose a shooting war with China.”
The former is a political problem that advocates need to find a strategy to cope with, but it’s not a reason not to do advocacy—we should be willing to trade away a little bit of American influence in order to avoid a large risk of civilizational collapse from misaligned AI.
If you’re earnestly worried about maximizing American influence, there are much better strategies for making that happen than trying to make sure that we have zero AI regulations. You could repeal the Jones Act, you could have a stable tariff regime, you could fix the visa system, you could fund either the CHIPS Act or a replacement for the CHIPS Act, you could give BIS more funding to go after chip smugglers, and so on.
I think the “concern” about the harmful geopolitical effects of moderate AI regulation is mostly opportunistic political theater by companies who would prefer to remain unregulated—there’s a notable absence of serious international relations scholars or national security experts who are coming out in favor of zero AI safety regulation as a geopolitical tool. At most, some experts might be pushing for easier land use and environmental approvals, which are not in conflict with the regulations that organizations like CAIP are pushing for.