Why should we expect future text generators to be any more dangerous or effective than human-generated propaganda? As advertising has advanced, so have our abilities to resist or avoid it. We mute the television when the commercials come on, teach children to analyze ads for their underlying message, create fact-checking services, and so on. It seems likely to me that we will develop anti-textgen technology roughly in sync with the development of text generation itself.
Imagine a future publishing company that puts out AI-generated nonfiction. It might use one AI to generate the text, another to fact-check it, and a third to provide adversarial takes on the book's claims. Its book on the Civil War would compete with others written by human experts, and eventually with books from other companies putting out computer-generated nonfiction.
Certainly we'd expect that the KKK would eventually get its hands on such software and create a revisionist, racist Civil War history. But the reading public would receive it in the context of other histories published by "reputable AI publishing firms" and human experts. I don't see why this situation is all that different from the one we have today, just with different means of production.
Yeah, they already do this, so what would change, really?