I think there’s a mild moral panic brewing here around LLM usage, leading people to jump to conclusions they otherwise wouldn’t and to assume “xyz person’s brain is malfunctioning due to LLM use” before considering other likely explanations. As an example, someone on my recent post implied that the reason I didn’t suggest using spellcheck for typo fixes was that my personal usage of LLMs was unhealthy, rather than (the actual reason) that using the browser’s inbuilt spellcheck as a first pass seemed so obvious to me that it didn’t bear mentioning.
Even if it’s true that LLM usage is notably bad for human cognition, it’s still probably bad to frame a specific critique as “ah, another person mind-poisoned” without pretty good evidence for that.
(This is distinct from critiquing text for being probably AI-generated, which I think is a necessary immune reaction around here.)
I guess I was imagining an implied “in expectation”: predictions about sufficiently speculative second-order effects are inaccurate enough to be basically useless, and so shouldn’t shift the expected value of an action. There are definitely exceptions, and it’d depend on how you formulate it, but “maybe my action was relevant to an emergent social phenomenon involving many other people with their own agency, and that phenomenon might be bad for abstract reasons, but it’s too soon to tell” just feels like… you couldn’t have anticipated that without being superhuman at forecasting, so you shouldn’t grade yourself on it having happened (at least for the purposes of deciding how to motivate future behavior).