I’m not a native English speaker, so some of my text may read as broken or unclear. Apologies for any inconvenience.
AI use is limited to grammar checking, with one exception noted in footnote [2]. The tools used are Claude and Writing Tools[1]. Google Translate was used for single-word and short-phrase translation.
Most of your argument rests on the premise that AI-generated text is not good enough and is easily distinguishable from human-written text. I agree with that today, but I have no doubt it will someday improve to the point where you cannot tell it apart from human writing.
I think we should focus more on whether people should do it even if nobody will catch them; this is a question of writer ethics. I wouldn’t mind AI use if an author clearly marked which parts were generated or modified by AI and how AI was used. Not being transparent about this is irresponsible to readers.
If you use AI to write something, people will know. Not everyone, but the people paying attention, who aren’t newcomers or distracted or intoxicated. And most of those people will judge you.
On the bias problem that many people have mentioned: it’s hard to say whether your presumptions are correct in the first place. Yes, AI-written text has identifiable patterns, but it is hard to confidently claim that not a single line in the articles you believe are human-written was actually produced by a machine. That presumption feels too absolute. (I saw your reply under another comment that addresses this issue.)
Plus, which context of writing are you talking about? Science, literature, news, messaging… audience expectations vary widely across these fields, and AI-generated text is accepted or rejected for different reasons in each.
For general long-form writing, I think using AI to generate text directly from a vague description of a topic can be considered a form of plagiarism: it is unearned authorship[2], since the thinking does not come from the human author at all. Such texts should not even be treated as original work.
My opinion is that, in an era where machines are as capable as humans in some domains, content consumers should have the right to know what is actually behind the content they spend their time on.
[1] Refers to the writing tool in Apple Intelligence.
[2] This word was provided by Claude.