A crisis for online communication: will bots and their users overrun the Internet?

The endgame of humanity’s AI adventure still looks, to me, to be whatever happens upon the arrival of comprehensively superhuman artificial intelligence.

However, the rise of large language models, and the availability of ChatGPT in particular, has amplified an existing Internet phenomenon to the point that it may begin to dominate the tone of online interactions.

Various kinds of fakes and deceptions have always been a factor in online life, and before that in face-to-face real life. But first, spammers took advantage of bulk email to send out lies to thousands of people at a time, and then chatbots provided an increasingly sophisticated automated substitute for social interaction itself.

The bot problem is probably already worse than I know. Just today, Elon Musk is promising some kind of purge of bots from Twitter.

But now we have “language models” whose capabilities are growing in dozens of directions at once. They can be useful, they can be charming, but they can also be used to drown us in their chatter and to trick us like an army of trolls.

One endpoint of this is the complete replacement of humanity, online and even in real life, by a population of chattering mannequins. This image probably exists in many places in science fiction: a future Earth on which humanity is extinct, but in which, in what’s left of urban centers running on automated infrastructure, some kind of robots cycle through an imitation of the vanished human life, until the machine stops entirely.

Meanwhile, the simulacrum of online human life is obviously a far more imminent and urgent issue. And for now, the problem is still bots with human botmasters. I haven’t yet heard of a computer virus with an onboard language model that’s just roaming the wild, surviving on the strength of its social skills.

Instead, the problem is going to be trolls using language models, marketers and scammers and spammers using language models, political manipulators and geopolitical subverters using language models: targeting individuals, and then targeting thousands and millions of people, with terrible verisimilitude.

This is not a new fear, but it has a new salience because of the leap in the quality of AI imitation and creativity. It may end up driving people offline in large numbers, and/or ending anonymity in public social media in favor of verified identities. A significant fraction of the human population may find themselves permanently under the spell of bot operations meant to shape their opinions and empty their pockets.

Some of these issues are also discussed in the recent post, “ChatGPT’s Misalignment Isn’t What You Think.”