In addition to the object-level problems with the post, it also cites wrong statistics (claiming that 97% of years of animal life are due to honey farming if you ignore insects, which is just plainly wrong; shrimp alone are something like 10%), and it randomly throws in insults at political figures, which is clearly against the norm on LessWrong (“having about a million neurons—far more than our current president” and “That’s about an entire lifetime of a human, spent entirely on drudgery. That’s like being forced to read an entire Curtis Yarvin article from start to finish. And that is wildly conservative.”).
I have sympathy for some of the underlying analysis, but this really isn’t a good post.
Also a sign of graceless LLM writing, incidentally. Those are the sorts of phrases you get when you tell ChatGPT to write polemic; cf. https://news.ycombinator.com/item?id=44384138 on https://www.alexkesin.com/p/the-hollow-men-of-hims
(Did ChatGPT come up with that interpretation of the statistic, and was Bentham’s Bulldog too lazy and careless, or dishonest, to notice that it seems like a rather extreme number and check it?)
I’ve fixed the 97% statistic! I agree that was a stupid error to make. It wasn’t from LLMs, and one bit of evidence for this is that the statistic isn’t online anymore, so an LLM wouldn’t have found it. In fact, I remembered the statistic from an old round of high-school debate, and emailed someone to find the link. I would be happy to forward you the correspondence if you are skeptical.
I am quite certain that I did not use LLMs in composing the post. Now, I don’t think the mere example of that joke is very convincing evidence that I used LLMs. Would AI really make a Yarvin joke or a joke about Trump not having neurons? Hard to imagine. As for the use of em-dashes, as you can see if you read my old posts from before the dawn of AI, I used them a lot too! I’ve also made a similar comment before, in a way that pretty clearly doesn’t look AI-generated (https://benthams.substack.com/p/lyman-stone-continues-being-dumb?utm_source=publication-search):
“Lyman then repeats his argument about shrimp having few neurons, once again ignoring the abundance of evidence that neuron count is a bad proxy for moral significance. Perhaps the shrimp aren’t the only ones with few neurons…”
For reference, I just asked ChatGPT to write a polemic in my style, and it was not very similar to what you suggest—here it is (it also didn’t make much sense):
On the Absurdity of Caring Less About the Future Just Because You’ll Be Dead
Every so often, someone will tell me—usually with the smug air of someone who thinks they’ve just said something deeply profound—that “Well, I care about the future, but not too far into the future. After all, I’ll be gone by then.”
This is supposed to be self-evident wisdom, a mic-drop moment where the utilitarian shuts up, nods sadly, and says, “Yes, of course, how could I have forgotten: once you’re dead, ethics ceases to function.”
But here’s the thing: no, it doesn’t. You can die without taking the moral law with you.
If you think people matter, they keep mattering after you’re gone. If you think suffering is bad, it remains bad even in the year 3000. You don’t get to mark an expiration date on morality like it’s a jug of milk.
Imagine applying this logic in any other domain:
“I oppose slavery in 100 years, but in 200 years? Pfft, who cares—won’t be my problem.”
Or:
“I’d like the cure for cancer to be found in my lifetime, but if it comes a decade after my death, well, frankly, let the tumors win.”
The bizarre thing is that the people who say this aren’t usually sociopaths. They’ll donate to help children they’ll never meet, they’ll praise great reformers who died centuries ago—but as soon as you point to future people they’ll never meet, it’s all “Eh, let them fend for themselves.”
It’s time to call this what it is: a lazy, self-exonerating dodge. The moral circle doesn’t collapse when you die. Your concern for the world shouldn’t come with a tombstone-shaped asterisk. The universe will keep running whether or not you’re around to watch, and the future will be inhabited by beings capable of joy and suffering. That is reason enough to care—no matter how many centuries or millennia away they are.
Because, let’s face it, if morality only applies while you’re alive, you’re not really doing ethics. You’re just doing public relations for your lifespan.
Disagree from me. I feel like you haven’t read much BB. These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
That is true. I have not, nor do I intend to.
That doesn’t actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
How about the fact that the opinions in the inserted asides are his actual opinions? If they were randomly generated, they wouldn’t be.
I’m not sure I believe they are his ‘actual opinions’, but it doesn’t matter to my points.
No one, particularly me, said they were ‘randomly generated’, so that again does not rebut any of my observations.
I just looked at your other alleged example of AI-generated polemic (from alexkesin.com), and I think evidence is lacking there too. That a link contains a UTM parameter referring to ChatGPT tells us only that the link was provided in ChatGPT output; it doesn’t tell us that the text around the link was written by ChatGPT as well. As for the article itself, I find nothing in its verbal style that is outside the range of human authorship. I wouldn’t even call it a bad essay, just perhaps dense, colorful, and polemical. People do choose to write this way, because they want to be stylish and vivid.
I hear that the use of em-dashes is far more dispositive; as a sign of AI authorship, it’s up there alongside frequent “delving”. But even the em-dash has its human fans (e.g. the Chicago Manual of Style). It can be a sign of a cultivated writer, not an AI… Forensic cyber-philology is still an art with a lot of judgment calls.
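To make the provenance point concrete: the UTM check under discussion is mechanical. Below is a minimal sketch, assuming the telltale value is a utm_source parameter naming chatgpt (the tag ChatGPT currently appends to links it cites); the function name is illustrative, not from any of the posts under discussion.

```python
from urllib.parse import parse_qs, urlparse

def came_via_chatgpt(url: str) -> bool:
    """Return True if the URL's utm_source names chatgpt.

    A hit only shows the link itself passed through ChatGPT output
    at some point; it says nothing about who wrote the surrounding
    prose.
    """
    params = parse_qs(urlparse(url).query)
    return any("chatgpt" in value for value in params.get("utm_source", []))

# The Substack link quoted earlier carries a human-typical tag instead:
url = ("https://benthams.substack.com/p/lyman-stone-continues-being-dumb"
       "?utm_source=publication-search")
print(came_via_chatgpt(url))  # False
```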
I note that, aside from bending over backwards to excuse multiple blatant signs of a common phenomenon (one which requires little evidence to promote to high posterior confidence), you are still not responding to what I said about BB and have instead chosen to go off on a tangent.
So again. Are you claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022), and thus rebutting my observation about how it sounds like he is gracelessly using ChatGPT?
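For what the “little evidence … high posterior confidence” step amounts to formally: by Bayes’ rule, posterior odds are the likelihood ratio times the prior odds, so when the base rate of LLM-assisted writing is high, even a weak stylistic tell moves the posterior a long way. The numbers below are purely illustrative assumptions, not anyone’s stated estimates.

$$\frac{P(\text{LLM} \mid E)}{P(\neg\text{LLM} \mid E)} \;=\; \frac{P(E \mid \text{LLM})}{P(E \mid \neg\text{LLM})} \cdot \frac{P(\text{LLM})}{P(\neg\text{LLM})}$$

For instance, a prior of 0.4 (odds 2:3) combined with a modest likelihood ratio of 5 for a given tell yields posterior odds of 10:3, a posterior of about 0.77.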
You say in another comment that you’re going around claiming to detect LLM use in many places. I found the reasons that you gave in the case of BB, bizarrely mundane. You linked to another analysis of yours as an example of hidden LLM use, so I went to check it out. You have more evidence in the case of Alex Kesin, *maybe* even a preponderance of evidence. But there really are two hypotheses to consider, even in that case. One is that Kesin is a writer who naturally writes that way, and whose use of ChatGPT is limited to copying links without trimming them. The other is that Kesin’s workflow does include the use of ChatGPT in composition or editing, and that this gave rise to certain telltale stylistic features.
Are you claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022)
The essay in question (“Don’t Eat Honey”) contains, by my count, two such sneers: one asserting that Donald Trump is stupid, the other asserting that Curtis Yarvin is boring. Do you not think that we could, for example, go back to the corpus of Bush-era American-college-student writings and find similar attacks on Bush administration figures, inserted into essays that are not about politics?
I am a bit worried about how fatally seductive I could find a debate about this topic to be. Clearly LLM use is widespread, and its signs can be subtle. Developing a precise taxonomy of the ways in which LLMs can be part of the writing process; developing a knowledge of “blatant signs” of LLM use and a sense for the subtle signs too; debating whether something is a false positive; learning how to analyze the innumerable aspects of the genuinely human corpus that have a bearing on these probabilistic judgments… It would be empowering to achieve sophistication on this topic, but I don’t know if I can spare the time to achieve that.
I found the reasons that you gave in the case of BB, bizarrely mundane.
It is in fact a mundane topic, because you are surrounded by AI slop and people relying heavily on ChatGPT writing, making it a mundane everyday observation infiltrating even the heights of wordcel culture (I’ve now started seeing blatant ChatGPTisms in the New Yorker and the New York Times). That is why you are wrong to bend over backwards to require extraordinary evidence for what have become ordinary claims (and also why your tangents and evasions are so striking).
So, I am again going to ignore those, and will ask you again—you were sure that BB was not using ChatGPT, despite the linguistic tells and commonness of it:
These political asides are of a piece with the philosophical jabs and brags he makes in his philosophical essays.
That doesn’t actually rebut my observation, unless you are claiming to have seen jibes and sneering as dumb and cliche as those in his writings from before ChatGPT (Nov 2022).
I am still waiting for an answer here.
Let me first try to convey how this conversation appears from my perspective. I don’t think I’ve ever debated directly with you about anything, but I have an impression of you as doing solid work in the areas of your interest.
Then, I run across you alleging that BB is using AI to write some of his articles. This catches my attention because I do keep an eye on BB’s work. Furthermore, your reason for supposing that he is using AI seems bizarre to me—you think his (very occasional) “sneering” is too “dumb and cliche” to be the work of human hands. Let’s look at an example:
bees seem to matter a surprising amount. They are far more cognitively sophisticated than most other insects, having about a million neurons—far more than our current president.
If that strikes you as something that a human being would never spontaneously write, I don’t know what to say. Surely human comedians say similar things hundreds of times every month? It’s also exactly the kind of thing that a smart-aleck “science communicator” with a progressive audience might say, don’t you think? BB may be neither of those things, but he’s a popular Gen-Z philosophy blogger who was a high-school debater, so he’s not a thousand light-years from either of those universes of discourse.
I had a look at your other alleged example of AI writing, and I also found other callouts in your recent Reddit posts. Generally I think your judgments were reasonable, but not in this case.
(I had missed some of this stuff because I skimmed parts of the post, which does update me on how bad it was. I think there is basically one interesting claim in the post: “bees are actually noticeably more cognitively interesting than you probably thought, and this should have some kind of implication worth thinking about”. I think I find that more valuable than Oliver does, but I’m not very confident about whether “one interesting point among a bunch of really bad argumentation” should be more like −2 to 3 karma or more like −10.)