These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
That doesn’t actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
I just looked at your other alleged example of AI-generated polemic (from alexkesin.com), and I think evidence is lacking there too. That a link contains a UTM parameter referring to ChatGPT tells us only that this link was provided in ChatGPT output; it doesn’t tell us that the text around the link was written by ChatGPT as well. As for the article itself, I find nothing in its verbal style that is outside the range of human authorship. I wouldn’t even call it a bad essay, just perhaps dense, colorful, and polemical. People do choose to write this way because they want to be stylish and vivid.
I hear that the use of emdashes is far more dispositive; as a sign of AI authorship, it’s up there alongside frequent “delving”. But even the emdash has its human fans (e.g. the Chicago Manual of Style). It can be a sign of a cultivated writer, not an AI… Forensic cyber-philology is still an art with a lot of judgment calls.
That a link contains a UTM parameter referring to ChatGPT tells us only that this link was provided in ChatGPT output; it doesn’t tell us that the text around the link was written by ChatGPT as well. As for the article itself, I find nothing in its verbal style that is outside the range of human authorship… But even the emdash has its human fans (e.g. the Chicago Manual of Style). It can be a sign of a cultivated writer
I note that, aside from bending over backwards to excuse multiple blatant signs of a common phenomenon which requires little evidence to promote to high posterior confidence, you still are not responding to what I said about BB and have instead chosen to go off on a tangent.
So again. Are you claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022), and thus rebutting my observation about how it sounds like he is gracelessly using ChatGPT?
You say in another comment that you’re going around claiming to detect LLM use in many places. I found the reasons that you gave in the case of BB bizarrely mundane. You linked to another analysis of yours as an example of hidden LLM use, so I went to check it out. You have more evidence in the case of Alex Kesin, *maybe* even a preponderance of evidence. But there really are two hypotheses to consider, even in that case. One is that Kesin is a writer who naturally writes that way, and whose use of ChatGPT is limited to copying links without trimming them. The other is that Kesin’s workflow does include the use of ChatGPT in composition or editing, and that this gave rise to certain telltale stylistic features.
Are you claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022)
The essay in question (“Don’t Eat Honey”) contains, by my count, two such sneers, one asserting that Donald Trump is stupid, the other asserting that Curtis Yarvin is boring. Do you not think that we could, for example, go back to the corpus of Bush-era American-college-student writings and find similar attacks on Bush administration figures, inserted into essays that are not about politics?
I am a bit worried about how fatally seductive I could find a debate about this topic to be. Clearly LLM use is widespread, and its signs can be subtle. Developing a precise taxonomy of the ways in which LLMs can be part of the writing process; developing a knowledge of “blatant signs” of LLM use and a sense for the subtle signs too; debating whether something is a false positive; learning how to analyze the innumerable aspects of the genuinely human corpus that have a bearing on these probabilistic judgments… It would be empowering to achieve sophistication on this topic, but I don’t know if I can spare the time to achieve that.
I found the reasons that you gave in the case of BB bizarrely mundane.
It is in fact a mundane topic, because you are surrounded by AI slop and people relying heavily on ChatGPT writing, making it a mundane, everyday observation infiltrating even the heights of wordcel culture (I’ve now started seeing blatant ChatGPTisms in the New Yorker and New York Times), which is why you are wrong to bend over backwards to require extraordinary evidence for what have become ordinary claims (and also why your tangents and evasions are so striking).
So, I am again going to ignore those, and will ask you again—you were sure that BB was not using ChatGPT, despite the linguistic tells and commonness of it:
These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
That doesn’t actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
Let me first try to convey how this conversation appears from my perspective. I don’t think I’ve ever debated directly with you about anything, but I have an impression of you as doing solid work in the areas of your interest.
Then, I run across you alleging that BB is using AI to write some of his articles. This catches my attention because I do keep an eye on BB’s work. Furthermore, your reason for supposing that he is using AI seems bizarre to me—you think his (very occasional) “sneering” is too “dumb and cliche” to be the work of human hands. Let’s look at an example:
bees seem to matter a surprising amount. They are far more cognitively sophisticated than most other insects, having about a million neurons—far more than our current president.
If that strikes you as something that a human being would never spontaneously write, I don’t know what to say. Surely human comedians say similar things hundreds of times every month? It’s also exactly the kind of thing that a smart-aleck “science communicator” with a progressive audience might say, don’t you think? BB may be neither of those things, but he’s a popular Gen-Z philosophy blogger who was a high-school debater, so he’s not a thousand light-years from either of those universes of discourse.
I had a look at your other alleged example of AI writing, and I also found other callouts from your recent reddit posts. Generally I think your judgments were reasonable, but not in this case.
That is true. I have not, nor do I intend to.
That doesn’t actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
How about the fact that the opinions in the inserted asides are his actual opinions? If they were randomly generated, they wouldn’t be.
I’m not sure I believe they are his ‘actual opinions’, but it doesn’t matter to my points.
No one, least of all me, said they were ‘randomly generated’, so that again does not rebut any of my observations.
I am still waiting for an answer here.