My assumption is that such a boycott would create selective pressure against the Boycotters and in favor of LLM enthusiasts, thus making the Boycotters first irrelevant Luddites, then culturally extinct.
This is similar to how people who boycotted social media for valid reasons essentially became outcasts and took those valid reasons with them, weakening their movement.
Boycotting AI is essentially a self-terminating meme: the harder you boycott, the less likely the Boycott Meme is to spread. It's the equivalent of trying to boycott literacy with newspaper articles decrying the danger of the written word.
I think the difference with veganism is that vegans argue there's no downside to being vegan (the argument is that vegan food is still tasty, healthy, and affordable), and there are very few high-income jobs that would be harder to get as a vegan (maybe CEO of Tyson Foods?). In an alternate world where cooking meat-based meals is one of the highest-paying and highest-status jobs, compromising your ability to do so by refusing to eat meat might be less effective than eating enough meat to stay good at your job while using your free time and income to work for systemic change.
Using ChatGPT etc. gives people such an advantage in (some) jobs, and is so easy to use “secretly”, that it seems highly unlikely that a significant number of people would boycott it.
My guess is that at most 1-10% of a population would actually adhere to a boycott, and those who do would be in a much worse position to work on AI Safety and other important matters.
It is strange to propose a boycott without saying why. Why are you against people using AI generation tools?
Personally, I find the text that ChatGPT generates useless and unpleasant to read, and have downvoted it on suspicion several times on LessWrong already. (It doesn't much matter whether my suspicions were correct; the text quality was downvote-worthy anyway for its leaden vagueness, platitudinousness, and high-school essay structure.)
BTW, ArtStation is no longer removing “No to AI” images: at least, I see a bunch of them there. It now has a policy that AI-generated content is allowed on the ArtStation marketplace but must be tagged as such, and users uploading their art can tag it to indicate that it is not to be used as training data for any AI.
Boycotting LLMs reduces the financial benefit of doing research that is (EDIT: maybe) upstream of AGI in the tech tree.