I agree with the general direction of this, but it seems to depend massively on the process by which the LLM text reached your eyes.
At one extreme, the output of a social-media bot, given some basic prompt and programmed to reply to random tweets, carries basically zero information about the “mental elements” behind it, as you put it.
At the other, if someone writes “I asked an LLM to summarize this document, and upon closely reviewing it, I think it did a great job,” this carries lots of information about a human’s mental elements. The human’s caption is obviously testimony, but the quoted LLM text also seems pretty much like testimony to me.
(There are plenty of intermediate cases, e.g. someone writes “I asked an LLM to summarize this document, which I personally skimmed, and it seems roughly right to me but caveat lector.”)
As I wrote, if you actually review it carefully, you will end up changing a lot of it.