It’s perfectly fine to have strong personal preferences for what content you consume, and how it’s filtered, and to express these preferences. I don’t think it’s cool to make hyperbolic accusations of violence. It erodes the distinctions we make between different levels of hostility that help prevent conflicts from escalating. I don’t think undisclosed LLM assistance can even be fairly characterized as deceptive, much less violent.
I don’t think it’s hyperbolic at all; I think this is in fact a central instance of the category I’m gesturing at as “epistemic violence,” which also includes things like p-hacking, lying, manipulation, and misleading data. If you don’t think that category is meaningful, or you dislike my name for it, can you be more specific about why? Or about why this is not an instance? Another commenter, @Guive, objected to my usage of the word “violence” here because “words can’t be violence,” which I think is a small skirmish in a wider culture war that I am really not trying to fight here.
To be explicit (again): I do not in any way mean to imply that a person using an LLM without disclosing it justifies physical violence against them. I also don’t think it’s usually an intentional aggression. But depending on the case, it CAN BE seriously negligent towards the truth and towards community truth-seeking norms, and through that careless negligence it can damage the epistemics of others, when a simple disclaimer, “epistemic status,” or source would have been VERY low effort to add. I have to admit I hesitate a bit to say this so explicitly, because many people I respect use LLMs extensively, and I am not categorically against this, and I feel slightly bad about potentially burdening or just insulting them; generally speaking, I feel some degree of social pressure against saying this. But I hesitate to back down from my framing without a better reason than that it feels uncomfortable and some people don’t like it.