To discredit writing because you suspect it’s AI is itself a sloppy heuristic—you should discredit writing because it’s bad.
A big reason for me, one that has nothing to do with quality, is that I can no longer trust that the human posting it actually had the sorts of thoughts that would be necessary to write the post.
And if they haven’t had those thoughts, what’s the point of engaging? [1]
Sure, maybe it’s still worth engaging for persuasion or status-y type reasons, but that’s usually not what interests me. Or just let me explicitly engage with the AI!
Imagine how obnoxious it would be if I had someone ghostwriting half of my comments and posts here… the fact that it’s AI isn’t even particularly relevant.
That’s a great point. Seems like the category of ‘informational writing’ is not so homogeneous. There are at least:

* Evaluative contexts: “I am judging the author (are they worth engaging with? anything from cover letter screening to replying on blogposts), and their writing is the only available proxy to judge the author by.”
* Expository contexts: “I am trying to learn an idea as best as I can.”
Take the example of ‘Falling in love feels good because of endogenous, not exogenous factors’. Even if written by AI, that could be useful + interesting in an expository context, for someone to whom that is new and subversive. But realizing it was written by AI would have a negative impact on any reader in an evaluative context.
I suppose I hadn’t considered the distinction at all and had assumed a focus on expository writing, though ironically, most blogposts are engaged with evaluatively instead!
Though I think the point of the essay still stands: if AI was used well enough (as judged by a reader’s independent evaluation of how interesting or good the idea is), that is still value delivered to the audience, value which may not have existed otherwise.
Consider your footnote: “Imagine how obnoxious it would be if I had someone ghostwriting half of my comments and posts here”—yes, it would be annoying to realize you were debating an automaton. But if you used AI to (1) write clarifying analogies (suppose your “ghostwriting half of my comments and posts here” example was the sort of example an AI could help surface), or (2) develop your opinions beyond what you previously held, I’m not sure a reader would get too annoyed!
Right, but neither of those cases necessitates using the AI’s writing, and that is the crucial distinction.
If I notice the writing is by AI, I don’t know to what extent the purported author is using it to do their thinking for them. Maybe it’s totally innocuous, like a non-native English speaker using it for translation. But it might also just be something lazily copied and pasted, not something they even read carefully. It scales much better for the lazy writers than the earnest writers.
Another thing: it’s easy to fool yourself into thinking the AI is just writing your own thoughts for you, when that turns out to be an illusion (I speak from experience).