Antisocial media: AI’s killer app?

Link post

Plans for what to do with artificial general intelligence (“AGI”) have always been ominously vague… “Solve intelligence” and “use [it] to solve everything else” (Google DeepMind). “We’ll ask the AI” (OpenAI).

One money-making idea is starting to crystallize: Replacing your friends with fake AI people who manipulate you and sell you stuff.

Welcome to the world of antisocial media.

The idea is this: where ‘social media’ had a dubious claim to connect you with your friends and loved ones, the new media will connect you to a stream of synthetic social activity and addictive “avatars” (i.e. fake people), more optimized for gripping your attention, engaging your affection… and selling you stuff.

The current crop of technology is basically chatbots that either natively support social and “parasocial” relationships with various AI characters (e.g. Character.AI, Replika, Chai) or are frequently used this way by users (e.g. of ChatGPT). These relationships can be romantic, sexual, pseudo-therapeutic, intensely personal, addictive, etc. Users are hooked.

The newest offering is “social AI video”—instead of people sharing videos of actual people (or cats) doing actual things, just generate videos wholesale using AI. Making such convincing deepfake video clips is now a technical reality, and companies have shifted from viewing fake content as a social menace to the main attraction.

First to enter the fray was Character.AI (with “Feed”), whose earlier AI companion offering prompted the first reported chatbot-encouraged teen suicide. Next was Meta (with “Vibes,” picking up where the Metaverse left off). The latest is OpenAI (with “Sora”), whose ChatGPT encouraged a suicidal teen to keep his noose hidden and gave advice on hanging it up. Update: as I write this blog post, a consultant has reached out asking me to do sponsored content for TikTok’s upcoming entrant into the AI video race. Yay.

In my mind, the term “social AI video” is a distraction from where this technology seems to be heading: on-demand AI companions that are 1000x more captivating and compelling than today’s chatbots. Where is this all leading? Human creativity and connection are important, but companies seem to aspire to replace friendship wholesale. Real AI could help them to realize this antisocial vision, and undermine human connection as a meaningful—and politically powerful—part of human experience and society.

Replacing Creatives

Companies are desperately trying to emphasize the “human creativity” angle on these new video offerings. No doubt some people will do amazing, creative things on their platforms. But the long-term game plan for AI companies is clear: replace creatives and take their profit.

Social media companies want to be the middleman in human relationships. AI companies want to do one better and cut out the supplier. Real AI would make it possible to completely automate the jobs of creatives. Today’s AI companies are jostling to be in position to capitalize on that as it happens.

Right now, successful content creators can demand serious compensation from the companies hosting their content. As antisocial media takes off, companies will increasingly nip such talent in the bud, identifying trends and rising stars, and replacing them with their own AI-generated knock-offs. Spotify is already playing this game—replacing human artists with AI-generated music in genres like ambient, classical, etc. in some of its most popular playlists.

Some creators could still make a living by licensing their likeness, so long as they let the antisocial media companies use AI to generate or “co-create” content in their name. The music industry has a long history of “manufacturing” pop stars—writing and playing “their” music, choosing “their” fashions and styles, etc. The stars still get a cut of the writing credits, and get to be the face of the enterprise. Everyone is happy… except artists who want to be more than a figurehead, and listeners who are looking for genuine connection with another person’s experience and expression.

Manipulating Users

What about users?

Antisocial media has the same issues as social media: addiction, fragmenting and polarizing society, sending users down rabbit holes of conspiracy theories, etc.

But antisocial media will also allow AI companies to supercharge influence (and charge for it). The move from mass manipulation to personalized persuasion will lead to unprecedented levels of control over users, as well as increasing dependency and other psychological harms, like violent psychotic episodes.

I’ve written about how future AI could “deploy itself” by simulating teams of human experts. Similarly, antisocial media could be like a team of spies, marketers, and designers who optimize every detail of every interaction for maximal impact. The movie The Social Dilemma depicts such a team evocatively. But the AI systems of that era were far less capable, with far less information and fewer tools at their disposal.

OpenAI CEO Sam Altman has promised to “fix it” or “discontinue offering the service” if users don’t “feel that their life is better for using Sora” after 6 months of use—which sounds like the sweet spot between “not yet addicted” and “realizing you have a problem.” But even if users say they are having a bad time, how much will companies really care, if these tools are making them money? And will users say they are having a bad time if they know it means losing access? The idea that Facebook or Twitter would be shut down entirely by their owners out of concern for users’ wellbeing is outlandish. Altman’s assurances here are about as credible as his 2015 promise to “aggressively support all [AI] regulation.”

The long-term threat to human culture and society

When I was a kid, someone told me that many of Isaac Asimov’s stories are set in a world where the ultra-wealthy no longer interact with other humans at all, just robot servants. I’d never bothered to confirm this (it seems it was introduced in Caves of Steel and The Naked Sun), but I found the vision disturbing and dystopian, and it stuck with me.

The way Zuckerberg talks about friendship here is a perfect example of this vision of other people as service providers, who necessarily can be replaced by AIs that provide the services of, e.g. “connectivity and connection” more efficiently and effectively. In this view of the world, people are reduced to a collection of “demands”. And communities are reduced to a set of producer/consumer relationships.

Zuckerberg wants us to be reassured that “physical connection” won’t be replaced by AI. But we are heading towards a world with real AI and robotics, and these technologies have the potential to bring about a world entirely devoid of human contact. Social norms against replacing human relationships with AI will be strong at first, but companies, AIs, and the market will keep working to wheedle their way in if we don’t stop them.

And this won’t necessarily be optional. If everyone else starts listening to feeds of AIs talking, nobody is listening to you. The replacement of your social feed is also the replacement of your voice in the conversation.

As real human connections are weakened and replaced, we lose our ability to resist broader AI power-grabs. Some people already depend on AI tools for their work. Users despair when chatbots playing romantic characters are suddenly changed or discontinued. AI companies will keep encouraging such trends every chance they get because doing so increases their power. If everyone is surrounded by AI ‘friends’, it will be hard to resist handing over more and more power to AI companies.

The limit of this dehumanizing process is not necessarily just a tech company takeover, but rather a broader destruction of human culture… “cultural disempowerment” as described in our recent paper on gradual disempowerment. As AI is given more and more decision-making power throughout society, human culture could be a bulwark against the excesses of AI-powered companies and governments, which might otherwise pursue profit and security to the point of human extinction. But only if we are still willing and able to resist, rather than being completely enthralled by AI-driven antisocial media.

The irony of antisocial media

AI is brought to us by the same industry—and many of the same companies—responsible for social media. These companies, these people, are not trustworthy.

Given the backlash over social media, it’s surprising that AI companies are still managing to successfully sell society a narrative that “AI has all these immense benefits that we need”. We’re promised a cure for cancer—what we’re getting are fake friends.

Social media was supposed to be this great thing that connected people. Instead it’s driven us apart, gamified our relationships, and commoditized connection. But it could be worse: at least there are real people at the other end. If tech companies really built social media to connect us—rather than monetize our need for connection—they wouldn’t be recklessly inserting AI into our relationships. The entire premise of social media is that it’s social. Antisocial media will do away with this unnecessary detail.

Let me know what you think, and subscribe to receive new posts!