I don’t think it’s hyperbolic at all; I think this is in fact a central instance of the category I’m gesturing at as “epistemic violence,” a category that also includes p-hacking, lying, manipulation, misleading data, etc. If you don’t think that category is meaningful, or you dislike my name for it, can you be more specific about why? Or about why this is not an instance? Another commenter, @Guive, objected to my usage of the word violence here because “words can’t be violence,” which I think is a small skirmish in a wider culture war that I am really not trying to get into.
To be explicit (again): I do not in any way want to imply that a person using an LLM without disclosing it justifies physical violence against them. I also don’t think it is an intentional aggression. But depending on the case, it CAN BE seriously negligent towards the truth and towards community truth-seeking norms, and through that careless negligence it can damage the epistemics of others, when a simple disclaimer / “epistemic status” / source would have been VERY low effort to add. I have to admit I hesitate a bit to say this so explicitly, because many people I respect use LLMs extensively, I am not categorically against that, and I feel slightly bad about potentially burdening or just insulting them; generally speaking, I feel some degree of social pressure against saying this. And precisely because of that pressure, I am reluctant to back down from my framing without a better reason than that it feels uncomfortable and some people don’t like it.
Thanks for going into more detail. I don’t think “epistemic violence” is a good term for this category:
Violence generally describes intentional harm, whereas p-hacking and misleading data are not always intentional
Violence generally describes harm that meets a certain threshold—flicking someone is technically violent, but it would be hyperbolic to describe it as such without more context.
I think a better term for this broad category might be “epistemic pollution,” as it describes filling the information environment with negative-value material. I would be comfortable describing e.g. a confidence scheme or an impersonation scam as epistemic violence, although there would have to be some point in doing so.
In general, I’m skeptical of coining a novel term with strong connotations to try to argue a point; it basically invites the noncentral fallacy.