Protectionism Will Slow the Deployment of AI

I believe I’m more optimistic than the average LWer that regulation will be passed that slows down AI capabilities research. The current capabilities of language models threaten the interests of the politically salient professional managerial middle class, which will cause constituencies in the government to pass protectionist, anti-AI policies.

I don’t think this regulation will be well targeted or well designed from an alignment point of view, but it will nevertheless slow capabilities research and deployment and alter the strategic AI development landscape.


  1. ChatGPT was a fire alarm for AI and, contra the original ‘no fire alarm’ post, people have noticed. ChatGPT’s capabilities, delivered through an accessible interface, mean many people have experienced it or soon will. This is already causing a general reaction among the chattering classes that I’d sum up as: this is cool, and also terrifying.

  2. The government is a combination of elected politicians and non-elected ‘civil servant types’ (bureaucrats, NGOs, the permanent government).

  3. The government is ineffective at long-term strategic thinking that maximizes the general good of the public, no argument there. However, that’s the wrong lens to view things through: from a public choice/class perspective, the government is quite responsive to certain specific interest groups.

  4. The professional managerial class (PMC) is a term of analysis/too-online slur[1] for the ‘class’ of professionals and managers who, in late-2010s and 2020s America, tend to be distinguished by having gone to university, holding a generic technocratic-liberal bent, and working in high-status professions (tech, academia, journalism, law, medicine, parts of finance, etc.).

  5. The government is very responsive to PMC interests; the civil service, almost by definition, comes entirely from the PMC. Democratic and Republican congresspeople are primarily from professional managerial backgrounds, and the donor and chattering classes come from this class as well.

  6. ChatGPT is, rightfully, very scary to PMCs. Headlines abound about AI replacing writers, serving as medical counsel, doing lawyers’ work; right now these are easily tuned out because there are always such stories (there were tons of op-eds seven years ago about truck drivers being replaced).

    1. However, I think this time will be different: ChatGPT gives that experience of being replaced directly to many people, who can try it at home. It will be a lot easier to deploy versions of this in applications that end up causing visible job loss to automation. And PMCs have much greater direct contact with political operators than truck drivers did.

  7. I don’t think this will fall into the culture-war swamp where nothing happens. There are many actions, outside of the classic red-blue dynamic, that Congress or, importantly, government agencies can take to slow down AI capability development to benefit favored classes:

    1. Ban its use in certain professions (“How can you trust an AI to give medical advice?”)

    2. Expand occupational licensing

    3. Protect the children by slowing tech (“you can generate what with image models??”)

    4. Put liability on the manufacturers of the models

    5. Require licensed development of AI: general anti-competitive bills that benefit established large companies that agree to de facto limits on certain use cases.

An admittedly vague scenario[2] that I see as likely: some type of PMC work gets automated, this causes a lot of distress, and political actors start ratcheting up pressure in various ways, perhaps not directly attributed to job loss but justified by all the usual sins (e.g., the four horsemen of the internet apocalypse).

Again, it won’t be ‘good regulation’, but in aggregate these measures will slow deployment and likely push development toward a more legible ecosystem model, primarily in response to concerns from the professional managerial class.

  1. ^

    I considered using a different term because ‘PMC’ is rarely used outside of tracts criticizing that group. However, alternatives didn’t really fit; ‘text manipulation professions’ seemed almost more insulting.

  2. ^

    Suggestions for operationalized questions for forecasting are appreciated.