If knowing that the source of a particular text is not human means it isn’t an assertion (or renders it devoid of propositional content), then presumably not knowing whether it is of human origin should have the same effect, as when a human (deliberately or otherwise) writes like an AI. But, I would argue, this is clearly false, because almost any argument or point a human makes can be formatted to appear AI-generated.
I had written a long comment in which I pretended that I might be an AI to make my point, but I decided not to post most of it to avoid ambiguity. Eventually, I converged on a more precise argument, which is the block of text above. (I added this to give context on my “human chain of thought”, which evolved as I wrote the comment, rather like a Large Language Model’s.)
No, because, for example, you can ask follow-up questions of a human, and they’ll give outputs that result from thinking humanly, which involves processes LLMs currently can’t perform. An LLM, meanwhile, can respond “Artificially Intelligently”, which involves processes humans can’t currently perform, at least not to the same degree. If your definition of testimony includes an aspect of humanity, then the claim that only humans are capable of producing it is almost a tautology.