My bias against AI writing is one of those things built by experience. For example, I want to know something, so I run a web search. I land on a page that has the unmistakable flavor of AI. And I am virtually always disappointed by the content. It looks superficially good, but it’s almost always low-effort bullshit. There is rarely any actual insight, or any facts beyond the most superficial and generic comments on the topic. And if the topic is even slightly off the beaten path, there will often be major errors and hallucinations. (For example, I was looking to find out how to upgrade the “class” of the newly introduced corvettes in No Man’s Sky, and the AI articles I read were completely fictitious.)
I suspect that AI writing is usually generic slop because AIs are built to predict the “most likely token” at each step, which strongly biases their output towards mediocrity and predictability. And similarly, when they lack knowledge, they make up something plausible.
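To make the “most likely token” point concrete, here is a toy sketch (made-up probabilities, not any real model’s decoder): greedy argmax decoding always returns the single most probable continuation, while sampling at a higher temperature occasionally lets a less predictable token through.

```python
import math
import random

# Toy next-token distribution for "The weather today is ..."
# (illustrative numbers only, not taken from any real model).
next_token_probs = {
    "nice": 0.40,          # the safe, generic continuation
    "fine": 0.25,
    "cloudy": 0.20,
    "apocalyptic": 0.10,
    "sentient": 0.05,      # rare, surprising continuations
}

def greedy(probs):
    """Argmax decoding: always pick the single most likely token."""
    return max(probs, key=probs.get)

def sample(probs, temperature=1.0):
    """Temperature sampling: higher temperature flattens the distribution,
    giving less probable (less generic) tokens more of a chance."""
    weights = {t: math.exp(math.log(p) / temperature) for t, p in probs.items()}
    total = sum(weights.values())
    r = random.random() * total
    cumulative = 0.0
    for token, weight in weights.items():
        cumulative += weight
        if r <= cumulative:
            return token
    return token  # floating-point edge case fallback

print(greedy(next_token_probs))                                       # always "nice"
print([sample(next_token_probs, temperature=1.3) for _ in range(5)])  # occasionally surprising
```

Greedy (or low-temperature) decoding keeps landing on “nice”, which is roughly the mediocrity I mean; real systems use more elaborate sampling schemes, but the pull towards the probable remains.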
So my distaste for AI writing is a Bayesian phenomenon. I am estimating P(this writing is worth my time|this was obviously written by an AI). And as I keep encountering more awful slop, I keep updating that prediction downwards.
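To put rough numbers on that update, here is a toy Beta-Bernoulli sketch (the encounter history is invented purely for illustration): each obviously-AI page that turns out to be slop drags the posterior estimate of P(worth my time | obviously written by an AI) further down.

```python
# Toy Beta-Bernoulli model of P(worth my time | obviously written by an AI).
# The encounter history below is invented, just to show the direction of the update.

alpha, beta = 1.0, 1.0   # uniform prior: no opinion yet

# 1 = the page was actually worth reading, 0 = low-effort slop.
encounters = [0, 0, 1, 0, 0, 0, 0, 1, 0, 0]

for worth_it in encounters:
    alpha += worth_it                  # count of worthwhile encounters
    beta += 1 - worth_it               # count of slop encounters
    estimate = alpha / (alpha + beta)  # posterior mean
    print(f"P(worth my time | obviously AI) ~ {estimate:.2f}")

# After this run the estimate sits near 0.25, and it keeps falling as slop accumulates.
```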
Note that this doesn’t apply to non-native English speakers using AI to translate. There, the likelihood that the writing is worthwhile is based almost entirely on the original writing before translation.
All it would take to improve my opinion of AI writing would be to find myself regularly surprised and delighted by finding new, correct, and non-generic information in pieces obviously written by AIs.
I agree that the most prevalent AI use case is SEO spam and content farming, and that almost no human input goes into pruning the mediocrity these tools typically output. I see this as an alignment problem rather than a problem with AI writing (i.e., AI writing tools) in principle. Of course AI writing out of the box is bad, and a reasonable person should stop reading once they realize that what they are reading fails to meet their quality bar.
My concern is when a Bayesian heuristic hardens into a Bayesian epistemology, and people come to believe that “badness” is an inherent property of AI writing, when in reality AI writing can become great. This is especially true if you subscribe to a Popperian epistemology rooted in “alternating conjecture and criticism” as an engine of progress, because you can choose to use AI tools in a way that emulates that cycle.
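As a sketch of what that could look like in practice, here is a minimal conjecture-and-criticism loop. The functions generate_draft, critique, and revise are hypothetical placeholders for whatever model calls or human review steps you plug in; they are not part of any real API.

```python
# A minimal sketch of a Popperian conjecture-and-criticism loop for writing.
# generate_draft, critique, and revise are hypothetical placeholders,
# not calls to any real library; substitute your own model calls or human review.

def generate_draft(prompt: str) -> str:
    # Hypothetical: produce an initial conjecture (a first draft).
    return f"Draft answering: {prompt}"

def critique(draft: str) -> list[str]:
    # Hypothetical: return concrete criticisms (unsourced claims, generic
    # paragraphs, factual errors). A human editor or a second pass can do this.
    return ["claim X is unsourced", "paragraph 2 is generic"]

def revise(draft: str, criticisms: list[str]) -> str:
    # Hypothetical: address each criticism and produce a new conjecture.
    return draft + " [revised to address: " + "; ".join(criticisms) + "]"

def conjecture_criticism_loop(prompt: str, max_rounds: int = 3) -> str:
    draft = generate_draft(prompt)
    for _ in range(max_rounds):
        criticisms = critique(draft)
        if not criticisms:       # nothing left to criticize: stop
            break
        draft = revise(draft, criticisms)
    return draft

print(conjecture_criticism_loop("How do corvette class upgrades work in No Man's Sky?"))
```

The point of the sketch is that the quality lives in the criticism step, not in whatever the model emits on its first pass.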
Regarding:
All it would take to improve my opinion of AI writing would be to find myself regularly surprised and delighted by finding new, correct, and non-generic information in pieces obviously written by AIs.
My view is that we won’t see this anytime soon, because the human curator’s input, if done well, will transform the writing so that it no longer seems “obviously written by AI”.