I agree that competition with China is a plausible reason regulation won’t happen; that will certainly be one of the arguments advanced by industry and NatSec as to why AI should not be throttled. However, I’m not sure that argument will be stronger than the protectionist impulses, and currently I don’t think it will be. Possibly it will exacerbate the “centralization of AI” dynamic that I listed in the ‘licensing’ bullet point, where large existing players receive money and a de facto license to operate in certain areas, and then avoid others (as memeticimagery points out). So, for instance, we see more military-style research, and GooAmBookSoft tacitly agree not to deploy AI that would replace lawyers.
To your point on big tech’s political influence: they have, in some absolute sense, a lot of political power, but relative to peer industries they are much weaker in political influence. I think they’ve benefitted a lot from the R-D stalemate in DC; I’m positing that AI regulation will go around or through this stalemate, and I don’t think they currently have the soft power to stop that.
This seems like an important crux to me, because I don’t think greatly slowing AI in the US would require new federal laws. I think many of the actions I listed could be taken by government agencies that over-interpret their existing mandates, given the right political and social climate. For instance, the eviction moratorium during COVID obviously should have required congressional action, but was done by fiat through an over-interpretation of authority by an executive-branch agency.
What agencies do or do not do seems mostly dictated by that socio-political climate, and by the courts, which means fewer veto points for industry.