I give ChatGPT a C- on reading comprehension.[1] I suggest that you stop taking LLMs’ word as gospel. If it can misunderstand something that clear-cut this severely, how can you trust any other conclusions it draws? How can you even post an “unbiased evaluation” with an error this severe, not acknowledge its abysmal quality, and then turn around and lecture people about truth-seeking?
I definitely advise against going to LLMs for social validation.
Here’s Claude 3.7 taking my side, lest you assume I’m dismissing LLMs because they denounce me. For context: Anthropic doesn’t pass the user’s name to Claude and has no cross-session memory, so it didn’t know my identity; there was no system prompt; and the PDFs were generated with a simple “right-click → print → save as PDF” on the relevant LW pages.
For context, if someone else stumbles on this trainwreck: I was sarcastically calling my response a “thoughtless kneejerk reaction”. ChatGPT apparently somehow concluded I’d been referring to funnyfranco’s writing. I wonder if it didn’t read the debate properly and just skimmed it? I mean, all the cool kids were doing it.
Not one of you made a case. Not one of you pointed to an error. And yet the judgment was swift and unanimous. That tells me the argument was too strong, not too weak. It couldn’t be refuted, so it had to be dismissed.
Believing that your post was voted down because it was too strong is very convenient for you, which makes that belief likely the product of motivated reasoning.
It’s a lot easier to write BS than to refute it, so people don’t usually want to bother exhaustively analyzing why BS is BS.
The challenge was to simply run the argument provided above through your own LLM and post the results. It would take about 30 seconds.
If you claim that “Not one of you made a case. Not one of you pointed to an error.”, that isn’t going to be resolved by running the argument through an LLM. Pointing to an error means manually going through your argument and trying to refute it.
You’re going heavy on the motivated reasoning here. The reason people don’t want to respond to you is not that you’re a pure genius; it’s that it isn’t worth the effort.
You’re also doing a motte and bailey on exactly what argument you’re trying to make. If all you’re saying is “sending X through an LLM produces Y”, then yes, I could just try an LLM. But that’s not all that you’re saying. You’re trying to draw conclusions from the result of the LLM. Refuting those conclusions is a lot of effort for little benefit.
The motte and bailey is:
“All I’m asking you to do is to run this through an LLM.”
But
“Actually, that’s not all I’m asking you to do. You also need to refute this whole post.”
And your stated reason for not responding to any of it is that it’s inconvenient.
It’s inconvenient to reply to lots of things, even false things. I probably wouldn’t reply to a homeopath or a Holocaust denier, for instance, especially not to refute the things he says.
Really, man?
Running it through the LLM is easy. Refuting the argument you’re using the LLM’s output to support takes longer, though.