When it comes to global industries with ludicrously massive national security implications, such as the tech industry, regulation is more likely to function properly if it is good for national security, and less likely to function properly if it is bad for national security.
This is obviously not the full story, but it’s probably the most critical driving factor in AI regulation; at least, it’s the one with the biggest information undersupply in the LessWrong community, and possibly among AI safety people in general. I’m really glad to see people approaching this problem from a different angle, diving into the details on the ground instead of just making broad statements about international dynamics.
Weakening domestic AI industries is bad for national security. This point has repeatedly been made completely clear to the AI governance people with relevant experience in this area, since 2018 at the latest and probably years before; every year since then, they have kept saying that they are not going to slow down AI via regulation.