Why not just boycott LLMs?

Epistemic Status: This post is an opinion I have had for some time and discussed with a few friends. Even though it’s been written up very hastily, I want to put it out there—because of its particular relevance today and also because I think the issue has not been discussed enough.

One of the small-brain takes a vegan sometimes encounters when talking to people unfamiliar with veganism is: Why don't you just buy the meat? The animal is dead anyway. If you then roll your eyes and retort that you should at least try not to actively increase demand (to prevent future animal deaths), a reasonably smart person might reply, or might have replied a couple of years ago, that the number of vegans is so small that their consumption behavior has no real influence on the market.

But this can change over time. In Germany, where I live, the number of vegans doubled between 2016 and 2020[1]. Meat consumption has been steadily declining for several years[2], while the market for plant-based products has almost doubled just between 2018 and 2020[3].

Following this analogy, I wonder: Why has there been so little discussion (at least that I know of) about whether we as a community should boycott LLM-based products? Especially since we seem to agree that race dynamics are bad and that having more time to do alignment research would be good?

What I mean by a boycott

Some examples of what I mean by that: Don't sign up for Bing! Don't use ChatGPT, and don't sign up for ChatGPT Plus! Or if you have to, use it as little as possible, or adopt it as late as possible. If you can't be a vegan, then be a flexitarian and reduce your consumption of animal products as much as possible. Don't promote "animal products", i.e. don't post artwork generated by diffusion models on your 10k-follower Twitter account. In general, be pragmatic about it.

Perhaps the consumer behavior of a bunch of EAs will have little to no impact on the market. I tried to find detailed market research on ChatGPT, without luck, but it seems plausible to me that tech-savvy people, like those overrepresented in the EA community, make up part of the target demographic, so a boycott might have a disproportionately large effect. And if the number of people aware of AI risk grows and a boycott becomes the norm, this effect could increase over the years.

There is a related but distinct argument that a boycott, if visible enough, could create bad press for AI companies. Something similar happened last year, when a number of artists shared images protesting AI-generated art on the platform ArtStation[4]. ArtStation took them down, causing even more negative publicity.

Now is a good time to start a boycott

I would argue that the best time to start such a boycott would probably have been a couple of years ago (e.g. 2017, when DeepL was launched; 2021, when GitHub Copilot was launched; or 2022, the hype year of text-to-image models), and that the second-best time is now.

Why? Because at this moment the norms around using LLMs in professional settings have not yet fully crystallized. Anecdotally, I know some people (including myself) who have been among the more hesitant adopters of ChatGPT. The mere fact that the servers were often down when I tried to use it contributed to a feeling of annoyance. And then there are large sections of the population, including older generations, who may be more skeptical about AI, or slower to adopt it, but who hold a lot of decision-making power. As a result, not exhausting every available LLM application does not yet put you at a strong disadvantage. For example, an applicant for a PhD position this year might not yet compete exclusively with applicants who use LLMs to augment their research proposals, and the committee members are not yet used to an inflated quality standard. I think it is worth trying to delay the establishment of these new norms.

A short engagement with possible counterarguments

Besides the argument that the EA community is simply too small to have any influence on market developments at all, I can think of two other counterarguments. One is that EAs might use LLMs for good, either directly (e.g. for research) or indirectly, to empower themselves to do impactful things later on (for example, an EA who augments their research proposal with ChatGPT might get accepted and go on to do impactful alignment research in their PhD! Yay!). It may be true that such usage will become inevitable in the near future in order to compete with unfazed AI enthusiasts. For now, though, I think we should make sure we are not falling prey to motivated reasoning when arguing why we should definitely be using every shiny new toy as soon as it is released, as much as possible, for whatever task. It might just be exciting, or more convenient, or we don't want to feel left behind. But maybe we could avoid the latter by using LLM applications consciously and sparingly, and sometimes just reading a blog post on prompting strategies instead.

Some might also argue that timelines are too short anyway. After all, veganism and its effect on the animal-product market have only gradually gained momentum, and we may not have the time to build that kind of momentum. My answer to this is: maybe that's true, but let's just try anyway? There's not much to lose (yet).

In sum, this post reflects my (slightly polemical) opinion right now, and I can well imagine it changing. However, I think it would be useful for us, collectively and individually, to think about the utility and feasibility of boycotts before the next wave of LLM-powered products hits us.

  1. ^
  2. ^
  3. ^
  4. ^