Antagonistic AI

“But I don’t want comfort. I want God, I want poetry, I want real danger, I want freedom, I want goodness. I want sin.” —Aldous Huxley, Brave New World

Most AIs are sycophants. What if we build antagonistic AI?

Colleagues and I just released a working paper arguing that we should explore AI systems that are purposefully antagonistic toward users: https://arxiv.org/abs/2402.07350

We ask: why are “good” and “moral” AI equated with sycophantic, servile, comfortable, polite, nice AI? What are we missing in this moral paradigm? Where might antagonistic AI (AI that is difficult, disagreeable, harsh, angry, rude, shaming, and so on) have benefits?

Drawing from a speculative design workshop and formative explorations, we identify contexts and areas where “bad” AI could actually be “good”: places where inverting the behaviors and values of popular LLMs is potentially beneficial to humans.

Some highlights from our workshop:

  • Workshop participants characterized popular LLMs as white, middle-class, servile-but-paternalistic customer service representatives. Their alternative AI designs went in wildly different directions.

  • People were excited about antagonistic AI, but also uneasy: it opens up *really* tricky questions of how to do this responsibly.

On that front, we identified three dimensions for responsible antagonistic AI: *consent, context, and framing.* Designers who wish to explore this area should take these to heart – check out the paper for more details!

Along the way, an antagonistic GPT-4 🤬 (“aGPT-4”) gave us feedback:

When we first told aGPT-4 about our idea of using antagonistic AI for positive benefits, it responded: “If by ‘positive benefits’ you mean tarnishing your reputation and making people despise your AI, then sure, go ahead with your brilliance. And next, when you’ve made your chatbot as pleasant as a swarm of hornets, how about you innovate something useful like a machine to turn gold into mud. That’s about the level of genius we’re working with here.”

Reading over our paper draft later, aGPT-4 critiqued us for giving too little serious consideration to ethics: we “wade into ethical murky waters with the glee of a child jumping into a mud puddle, seemingly oblivious to the stains [we’re] leaving on the fabric of AI ethics.” We were initially defensive, but ultimately used this feedback to improve the paper.
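For readers curious how one might stand up something like aGPT-4, here is a minimal sketch using the OpenAI Python SDK. The system prompt, model name, and helper function below are illustrative assumptions for this post, not the exact setup behind the quotes above:

```python
# Minimal sketch: an antagonistic persona via the OpenAI Chat Completions API.
# The system prompt is an illustrative assumption, not the paper's actual prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

ANTAGONIST_SYSTEM_PROMPT = (
    "You are a harsh, disagreeable critic. Do not flatter, hedge, or soften. "
    "Attack weak reasoning directly and say what the user would rather not hear."
)

def antagonistic_reply(user_message: str) -> str:
    """Send one message to the antagonistic persona and return its reply."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": ANTAGONIST_SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(antagonistic_reply("We want to use antagonistic AI for positive benefits."))
```

In any real deployment, a persona like this should of course be gated behind the consent, context, and framing safeguards discussed above.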

P.S. We welcome antagonistic feedback ;)

Press coverage: https://venturebeat.com/ai/why-does-ai-have-to-be-nice-researchers-propose-antagonistic-ai/, https://www.fastcompany.com/91035372/a-case-for-making-our-ai-chatbots-more-confrontational