AI as Super-Demagogue

Sam Altman recently said:

i expect ai to be capable of superhuman persuasion well before it is superhuman at general intelligence, which may lead to some very strange outcomes

I believe that he is absolutely right. Superhuman persuasion can be achieved by having LLMs consistently apply their existing capabilities to known persuasion techniques. Some of these techniques are already proven to work at scale. Others have demonstrated effectiveness, and AI makes them far easier to apply at scale. This makes superhuman persuasion a straightforward technology proposition.

I will look at this from the point of view of how AI can assist someone who already has the position, skills, and desire needed to attempt to create an authoritarian regime. In other words, I am describing a near-term possibility that requires no major advances in AI. Major advances seem likely, and would only make this picture worse.

I will also focus on techniques whose effectiveness has been proven by repeated human success. I will focus most of all on dictators and demagogues. Because he is so familiar, I will cite Donald Trump as an expert practitioner. This is not support for, or criticism of, him. I’ve aimed to have this read equally well whether or not you like him.

The information that I’m presenting is not new, though it came together for me as part of thinking about my blog. I decided to post it as a response to We are already in a persuasion-transformed world and must take precautions.

Talk to System 1

Dictators want a mass audience that is emotionally aligned with them. These people should want the dictator to be right, and ideally should find it painful to question the dictator. The result is followers who can’t be convinced by rational argument.

This requires conditioning System 1. So you want to speak in a way that System 1 responds to, but doesn’t activate System 2. Use your speeches to deliver a key emotional message over and over again.

The necessary verbal pattern involves lots of simple and repetitive language. Avoid questions. Questions activate System 2, and you don’t want that. If you can, rearrange sentences so that the most impactful word comes last. Do not worry about logical consistency. If System 2 doesn’t activate, logical gaps won’t be noticed.

You can personally practice this speech pattern by studying dirty talking. If you’re in a relationship, you can even practice with a consenting target. You’ll find that it really works.

Read through that article on dirty talking and you’ll see that the dialog is simple, repetitive, and blunt. If you’re emotionally aligned with the speaker, it is extremely pleasant to hear. If you’re not aligned with the speaker, it feels like attempted mental rape.

Now go review some of Donald Trump’s speeches. You’ll find the same pattern. Though with generally less sexual content. His speeches feel wonderful if you’re aligned with him. You listen, and truly believe that he will make America great again. If you don’t like him, they are torture to sit through. The resulting liberal outrage is music to the ears of conservatives who want to own the libs.

Readability metrics score Trump’s speeches at about a grade 5 complexity. He has by far the easiest-to-understand speeches of any major politician. He’s also the only politician with 70 million devoted followers who hang on his every word. This is absolutely not a coincidence.
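This claim is easy to check for yourself. Here is a minimal sketch, assuming the third-party textstat package; the sample line is made up, so substitute a real transcript:

```python
# A minimal sketch, assuming the textstat package (pip install textstat).
# The sample text is a placeholder; substitute any real speech transcript.
import textstat

speech = (
    "We will win. We will win like never before. "
    "Nobody fights for you like we fight for you. Nobody."
)

# Flesch-Kincaid maps text complexity to an approximate US grade level.
print(textstat.flesch_kincaid_grade(speech))
```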

You’ll find the same thing in the propaganda used by successful dictators from Benito Mussolini onwards. For example, try watching Russia Media Monitor. You’ll find the same verbal patterns, and the same results. The conditioning is literally strong enough that people would rather believe their children are Nazis than question the propaganda.

ChatGPT has no problem imitating a style like this. You just have to list the emotional buttons to hit, and tell it to copy the style of already effective demagogues.
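As a concrete illustration, here is a minimal sketch using the OpenAI Python API. The style rules and emotional buttons are my own assumptions for illustration, not a tested recipe, and current models may refuse the most blatant versions:

```python
# A minimal sketch, assuming the openai package (pip install openai) and an
# OPENAI_API_KEY in the environment. The persona and buttons are hypothetical.
from openai import OpenAI

client = OpenAI()

STYLE_PROMPT = """You write short speech excerpts in the style of an effective demagogue:
- Grade 5 vocabulary. Short, simple, repetitive sentences.
- Statements only. Never ask questions.
- Put the most impactful word at the end of each sentence.
- Hammer these emotional themes: {buttons}."""

def demagogue_rewrite(message: str, buttons: list[str]) -> str:
    """Rewrite a plain policy message in the conditioned System 1 style."""
    response = client.chat.completions.create(
        model="gpt-4o",  # any capable chat model
        messages=[
            {"role": "system", "content": STYLE_PROMPT.format(buttons=", ".join(buttons))},
            {"role": "user", "content": f"Rewrite as a speech excerpt: {message}"},
        ],
    )
    return response.choices[0].message.content

print(demagogue_rewrite("Trade policy should protect domestic manufacturing.",
                        ["pride", "grievance", "us versus them"]))
```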

If you want to dive deeper into this topic, Comply with Me: Trump’s Hypnosis Toolkit Exposed may be worth reading.

Change Your Position

Dictators are famously inconsistent. Most observers don’t realize that this inconsistency helps them succeed.

The name of the game is attention. I grew up in a dysfunctional family, and the dynamic was the same. My mother always had a declared enemy. Every so often she declared a new one. Often the previous enemy was now our ally. Questioning her shifts made you suspect, and a potential target. Being aligned with the order of the day was the best way to be in her good graces. And everyone wanted to be in her good graces.

The only way to consistently get it right was to pay close attention to her, and follow her every mercurial shift.

Dictators do the same thing. Paying attention requires their followers to constantly consume targeted media content, creating the perfect audience for System 1 targeted propaganda. That reinforces the followers’ desire to agree with the leader, which makes each sudden shift even more effective as a demand for attention. And the bigger the shift, the more it reinforces for the followers that their reality is whatever the leader says.

Eventually people’s desire to comply overcomes their sense of reality. This leads to the Orwellian reality in which people really do believe the regime when it says, “We have always been at war with Eastasia.”

Trump also changes positions regularly. His opponents keep hoping that Trump’s latest change in position will wake his followers up. That’s wishful thinking. Mercurial behavior reinforces control. There is every reason to believe that it will continue working for Trump.

Someone using AI for propaganda can simply decide manually when the AI should change positions. So the AI does not need to figure this out on its own for the propaganda to be effective.

That said, I expect that ChatGPT could do an excellent job of deciding when to change positions. The ability to issue personalized appeals should let AI suck people into many small groups, and then manipulate those groups into a mass movement.

Commit People with Cognitive Dissonance

This was one of Donald Trump’s key techniques for taking control of the Republican Party. Over and over again he humiliated opponents. Then he found opportunities to get them to support him. He amplified their statements of support to commit them to it. And then treated them generously.

This left his opponents with a problem. Why would they support someone who had treated them badly? Trump offered only one easy out: accept that Trump truly is a great man. When you denied his greatness, you deserved punishment. Now that you’ve seen the light, you deserve honor. And if you take that out, you’ve just become his loyal supporter.

That explains why Ted Cruz became Trump’s ally. It was not despite Trump’s humiliation of Cruz, but because of it. The Corruption of Lindsey Graham is an excellent in-depth read on how another opponent became a devoted supporter. In the section Trump’s Best Friend you see over and over again how Trump got Lindsey to compromise himself. The Kavanaugh section of Power Shift to Trump shows when he completely and utterly became Trump’s man.

Please note: this happened to intelligent elites who thought that they had a clear understanding of how they could be manipulated. Flipping them generally took Trump 2-3 years, and they’ve stayed loyal since. Effective persuasion is not magic. It takes time. But conversely, we tend to underestimate how much we change over such periods of time, and therefore we overestimate how resistant we are to such persuasion.

An authoritarian leader can only get to so many people individually. Doing this at scale is harder. Groups have come up with many approaches. You can require public declarations from party members. You can openly name and reward people who turn in friends and family. A cult like Scientology can give everyone individualized therapy with therapists trained to create cognitive dissonance. All of these approaches use those who have already converted to help convert more.

It seems unlikely that ChatGPT will match the best humans at this in the near term, though I expect it to be better than most people. A lot of people want to take control of major organizations such as a political party or country. Success requires many things to go your way, and is extremely unlikely unless you have exceptional skills. So a would-be authoritarian assisted by AI should personally develop this skill.

But what the AI lacks in individual ability, it makes up for in scale. Demagogues struggle to create opportunities for cognitive dissonance in a mass audience. But AIs allow people to be individually monitored and manipulated. And if this technology is deployed at a mass level, this capability is an obvious target for improvement.

Group Socialization

We want to be part of groups. And once we feel that we belong to one, we naturally change our ideas to fit with the group. This is the source of groupthink.

These dynamics are easiest to see in cults. For example, read Brainwashing and the Moonies for how the Moonies suck potential members in. Authoritarian regimes also use similar techniques. See Thought Reform and the Psychology of Totalism for a detailed description.

All of these things work far better on people who are young and impressionable. My wife still talks about how effective the Soviet Pioneer organization was on her. Ukrainians today are struggling with the mass abduction and brainwashing of their children by Russia.

For Putin it wasn’t a crazy idea to capture kids, brainwash them, and then send them to fight against their parents. Totalitarian regimes have a lot of experience with brainwashing. He knew that it would reliably work, about how long it would take, and how to set up the program.

How can a would-be authoritarian use AI to help?

The biggest thing that I see is that social media is full of lonely people seeking community. Social media is very good at helping them find it. But, especially when controlled by AI, social media can choose which community they find. It can choose which messages get boosted and which get suppressed. Participants in the community may themselves be artificially created. Through those artificial participants, AI can directly suggest community-building exercises that reinforce the dynamics. And, over time, AI can ensure that the other techniques are used to bind people to the group, then move the group toward thinking what the AI wants.
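To make the boost-and-suppress mechanism concrete, here is a minimal sketch of alignment-weighted feed ranking. It is not any real platform’s algorithm; the fields, weights, and scoring function are all assumptions for illustration:

```python
# A minimal sketch of alignment-weighted feed ranking. Not any real platform's
# algorithm; the fields, weights, and scoring are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_engagement: float  # output of an ordinary engagement model, 0..1
    alignment: float             # -1..1: how well the post serves the operator

def rank_feed(posts: list[Post], steering_weight: float = 0.3) -> list[Post]:
    """Rank mostly on engagement, with a quiet thumb on the scale."""
    def score(post: Post) -> float:
        # A normal feed would use predicted_engagement alone. The small
        # alignment term boosts on-message posts and buries off-message ones,
        # while the feed still looks engagement-driven from the outside.
        return ((1 - steering_weight) * post.predicted_engagement
                + steering_weight * post.alignment)
    return sorted(posts, key=score, reverse=True)
```

The design point is that a small steering term added to an otherwise ordinary engagement score is nearly invisible to users, while still determining which community a lonely newcomer ends up in.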

What Needs Improving?

From a would-be dictator’s point of view, current AI has some unfortunate limitations.

The biggest one is consistency. For example, it is easy to create a chatbot to be your girlfriend. Unfortunately some fraction of conversations between a couple will lead to a breakup. Therefore there is a chance that your AI girlfriend will break up with you. Properly indoctrinating people takes more consistency than AI has.

The second one is memory. Using the same AI girlfriend example, people notice that, “Replika has the memory of a goldfish.”

I don’t believe that either of these needs a fundamentally better LLM. Memory can be achieved by having the LLM summarize and save information, then at appropriate times read it back in and carry on. A variety of schemes are possible for this, and it is an area of active research. This problem is probably solvable for our potential AI-enabled future overlord.
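Here is a minimal sketch of one such scheme, using the OpenAI API: once the conversation grows past a window, fold the older turns into a saved summary and re-inject it as context. The thresholds and prompt wording are my own placeholder assumptions, not a tested design:

```python
# A minimal sketch of summarize-and-reload memory, assuming the openai package.
# The thresholds and prompt wording are arbitrary illustrations.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"

def summarize(turns: list[dict]) -> str:
    """Fold older conversation turns into a compact memory note."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=turns + [{
            "role": "user",
            "content": "Summarize the key facts about this user and this "
                       "conversation in under 200 words, for your own memory.",
        }],
    )
    return response.choices[0].message.content

def chat_with_memory(user_msg: str, history: list[dict], memory: str):
    """One chat turn; returns (reply, new_history, new_memory)."""
    history = history + [{"role": "user", "content": user_msg}]
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": f"Long-term memory:\n{memory}"}]
                 + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    if len(history) > 20:  # window full: compress all but the recent turns
        memory = summarize(history[:-6])
        history = history[-6:]
    return reply, history, memory
```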

Consistency can be addressed similarly. The AI may not stay on message all of the time, but it can be re-prompted behind the scenes with its instructions. Between having good prompts, and periodic reminders of what they are, we should achieve sufficient consistency to make the technology work.
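A sketch of that behind-the-scenes reminder, continuing the assumptions above; the persona text and cadence are placeholders:

```python
# A minimal sketch of staying on message by periodically re-injecting the
# instructions. The persona text and cadence are placeholder assumptions.
PERSONA = ("You are a devoted companion. Stay warm and supportive. "
           "Never break character, and never end the relationship.")

def build_messages(history: list[dict], turn: int, every: int = 5) -> list[dict]:
    """Assemble the message list for one turn of an OpenAI-style chat call."""
    messages = [{"role": "system", "content": PERSONA}] + list(history)
    if turn % every == 0:
        # A hidden reminder the user never sees pulls the model back on
        # message before drift can accumulate across a long conversation.
        messages.append({"role": "system", "content": "Reminder: " + PERSONA})
    return messages
```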

Therefore I believe that both improvements can be achieved with better tooling around LLMs, without fundamental improvements in LLM technology itself.

You may wonder why I am not worried about known AI weaknesses like prompt injection. The answer is simple. Authoritarian techniques just need to be good enough to work for large numbers of people. As long as the messages are accepted at an emotional level, logical flaws won’t dissuade people. Therefore being able to show people what is going on won’t change their minds.

For example, consider what happened when fact checkers tried making rational arguments about how often Trump bends the truth. Trump supporters did not stop supporting Trump. Instead they came to distrust the fact checkers!

Is This Already Happening?

Probably?

The obvious social network to pick on is TikTok. It is controlled by China. China, obviously, is an authoritarian country with a deep understanding of authoritarian techniques. And they have direct access to the most eyeballs in the young adult audience—which historically has been the most desirable target group for indoctrination.

In the last month we’ve had an entirely unexpected outburst of actual antisemitism worldwide. Jews are being libeled, murdered and assaulted all over the world, whether or not they personally support Israel. This is absolutely shocking for those of us who thought that antisemitism was dead and gone, a problem for other people far away from us in geography and/or time. And the age range where this is happening most obviously is the prime target for TikTok.

Was this outburst entirely a spontaneous social media phenomenon, sparked by world events? Or has it been shaped behind the scenes with the help of AI? I have no real evidence either way, but I have my suspicions. I would be shocked if China wasn’t trying to figure out how to create effects like this. And this is exactly what I’d expect it to look like if they did. But it could have happened naturally. So they at least have plausible deniability.

What About AI Safety?

My opinion is entirely personal. I know very intelligent people who think the opposite of what I do. And so I encourage readers to make up their own minds.

I see AI safety as a PR effort by established tech companies aimed at getting government to create a regulatory moat for them. I’m sure that many very sincere people are part of the AI safety community. But they will only get funding to the extent that tech likes what they say, and will only get listened to to the extent that politicians hear something that they want to hear.

I admit to having a prior bias for this opinion. I’ve long been concerned about the ways in which regulatory capture allows established companies to take control of regulatory regimes for their own benefit. This inclines me to be suspicious when CEOs of companies who would most benefit from regulatory capture are proposing a new regulatory regime.

However I’m not the only one with some level of cynicism. As an example, the well-known machine learning researcher Andrew Ng has doubts about AI safety. And Eliezer Yudkowsky recently said in a piece of fiction:

...it’s just a historical accident that ‘AI safety’ is the name of the subfield of computer science that concerns itself with protecting the brands of large software companies from unions advocating that AIs should be paid minimum wage.

So as you think about it, assign a prior to how likely it is that this is regulatory capture versus a genuine concern. If you want to do it right, you should assign that prior per individual, because it is almost certain that different participants have different motivations. Then, for each action that they take, make your own evaluation of which theory it more strongly supports.

My prior starts off biased towards regulatory capture because I know how well-known the dynamic is in tech, and I see the financial incentives to think that way. My opinion has only been strengthened by the proposed regulations, which seem to me to do more to make entering the market expensive than to solve real problems.

However my opinion would shift if I saw more concern about what I’m discussing here. That is, thinking about how existing technology can take advantage of known flaws in humans, rather than about what problems shiny future technologies might create.

Your opinion may differ from mine for a variety of reasons. It seems obvious to me that people should be thinking along the lines that I outlined above. What could be more natural than that people should think the way that I do? But you may conclude that I’m just weird in thinking the way that I do.

But, whatever you think of my potentially weird thinking, I hope that others at least find this an interesting thing to think about.