This is not what it feels like to me. What it feels like is that I am reading a low quality SEO optimized human website that puts boilerplate like “What is an X? What is the definition of an X? Read on to learn what an X is...” or a news article that launches into a personal anecdote that I don’t care about—I expect low density vapid ‘slop’ and want to skip to the good parts.
LLM writing is not so transparently doing that, yet it seems like my brain can (correctly!) tell it's low density and so skips past. Unfortunately it's less likely to concentrate the tidbits somewhere in a higher density way (like the recipe part of a recipe site, which is usually fine), and more likely to intersperse the tidbits through the low density stuff so you get a middle ground of density. Human articles like this tend to have large chunks you can entirely skip to get to the good parts; LLMs with this problem make me feel like I have to read the whole thing just to mine 2 sentences' worth of info.
Thinking of times while reading/hearing human words where I felt the manipulation worry you're describing—it feels like it takes the more reflective parts of me to notice, to then say "hey uhh am I being 'hacked' and my feelings are wrong here?" which then causes the rest of me to go "oh shit oh shit, time for vigilance and sticking to past thoughts and the outside view". I don't feel this about LLMs—at worst I get the "oh sometimes the links are fake or the jargon is wrong, in a way where a human would typically not just invent jargon and then lie about it being standard"—it doesn't feel like manipulation, just more like a human flailing about trying to confabulate, except more shameless and with a 'tone' that's harder to read.