[Question] Snapshot of narratives and frames against regulating AI

This is a speculative map of a hot discussion topic. I’m posting it in question form in the hope we can rapidly map the space in answers.

Looking at various claims on X and at the AI summit, it seems possible to identify some key counter-regulation narratives and frames that various actors are pushing.

Because a lot of the public policy debate won’t be about “what are some sensible things to do” within a particular frame, but rather about fights for frame control, i.e. “what frame to think in”, it seems beneficial to have at least a sketch of a map of the discourse.

Here is one example of a “local map”:

“It’s about open source vs. regulatory capture”

It seems the coalition against AI safety, most visibly represented by Yann LeCun and Meta, has identified “it’s about open source vs. big tech” as a favorable frame in which to argue and build a coalition of open-source advocates who believe in the open-source ideology, academics who want access to large models, and small AI labs and developers who believe they will remain competitive long-term by fine-tuning smaller models and capturing various niche markets. LeCun and others attempt to portray themselves as the force of science and open inquiry, and the scaling labs proposing regulation as evil big tech attempting regulatory capture. Because this seems to be the preferred anti-regulation frame, I will spend the most time on it.

Apart from the groups mentioned, this narrative seems to be memetically fit among a “proudly cynical” crowd which assumes that everything everyone is doing or saying is primarily self-interested and profit-driven.

Overall, the narrative has clear problems explaining away inconvenient facts, including:

  • Thousands of academics calling for regulation are awkward counter-evidence for the claim that x-risk is just a ploy by the top labs.

    • The narrative handles this by casting some of the senior academics as simply deluded, and the others as pursuing their own self-interested strategy in expectation of funding.

  • Many of the people explaining AI risk now were publicly concerned about it before founding labs, at a time when this was academically extremely unprofitable; some sacrificed standard academic careers over it.

    • The narrative’s move here is simply to ignore this.

Also, many things are simply assumed rather than argued: for example, whether the resulting regulation would actually be in the interest of the frontrunners.

What could be memetically viable counter-arguments within this frame?

Personally, I tend to point out that the motivation to avoid AI risk is completely compatible with self-interest: leaders of AI labs also have skin in the game.

Also, I have recently been asking people to apply the explanatory frame of ‘cui bono’ to the other side as well, namely Meta.

One possible hypothesis here is that Meta just loves open source and wants everyone to flourish.

A more likely hypothesis is that Meta wants to own the open-source ecosystem.

A more complex hypothesis is that Meta doesn’t actually love open source that much, but has a sensible, self-interested strategy aimed at a dystopian outcome.

To understand the second option, it helps to first understand the “commoditize the complement” strategy: a business approach in which a company drives down the cost, or increases the availability, of goods or services complementary to its own offering. The result is an increase in the value of the company’s own offering.

Famous successful examples of this strategy include Microsoft and PC hardware: hardware became a commodity, while Microsoft came close to monopolizing the OS and extracted huge profits. Or Apple and the App Store: apps are the complement to the phone, and they have become a cheap commodity under immense competitive pressure, while Apple became the most valuable company in the world. Gwern has a great post on the topic.

The future Meta aims for is:

  • Meta becomes the platform of virtual reality (Metaverse).

  • People basically move there.

  • Most of the addictive VR content is generated by AIs, which is the complement.

For this strategy to succeed, it’s quite important to have a thriving ecosystem of VR content producers, competing on whose content is the most addictive or hacks human brains the fastest. Why an entire ecosystem? Because it fosters more creativity in brain hacking. Moreover, if the content were produced by Meta itself, it would be easier to regulate.

Different arguments push back against ideological open-source absolutism: unless you believe that absolutely every piece of information should be freely distributable, you accept there are some conditions under which certain information should not be public.

Other clearly important narratives to map include at least:


“It’s about West vs. China”

Hopefully losing traction, with China participating in the recent summit and top scientists from China signing letters calling for regulation.

“It’s about near-term risks vs. hypothetical sci-fi”

Hopefully losing traction, given that anyone can now interact with GPT-4.