Persuasion Tools: AI takeover without AGI or agency?
[epistemic status: speculation]
I’m envisioning that in the future there will also be systems where you can input any conclusion that you want to argue (including moral conclusions) and the target audience, and the system will give you the most convincing arguments for it. At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.
--Wei Dai
What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand, and focus on whether the person making the argument seems friendly, or unfriendly, using hard to fake group-affiliation signals?
--Benquo
1. AI-powered memetic warfare makes all humans effectively insane.
--Wei Dai, listing nonstandard AI doom scenarios
This post speculates about persuasion tools—how likely they are to get better in the future relative to countermeasures, what the effects of this might be, and what implications there are for what we should do now.
To avert eye-rolls, let me say up front that I don’t think the world is likely to be driven insane by AI-powered memetic warfare. I think progress in persuasion tools will probably be gradual and slow, and defenses will improve too, resulting in an overall shift in the balance that isn’t huge: a deterioration of collective epistemology, but not a massive one. However, (a) I haven’t yet ruled out more extreme scenarios, especially during a slow takeoff, and (b) even small, gradual deteriorations are important to know about. Such a deterioration would make it harder for society to notice and solve AI safety and governance problems, because a society with worse collective epistemology is worse at noticing and solving problems in general. It would also be a risk factor for world war three, revolutions, sectarian conflict, terrorism, and the like. Moreover, it could happen locally, in our community or in the communities we are trying to influence, and that would be almost as bad. Since the date of AI takeover is not the day the AI takes over, but the point when it’s too late to reduce AI risk, these things effectively shorten timelines.
Six examples of persuasion tools
Analyzers: Political campaigns and advertisers already use focus groups, A/B testing, demographic data analysis, etc. to craft and target their propaganda. Imagine a world where this sort of analysis gets better and better, and is used to guide the creation and dissemination of many more types of content.
Feeders: Most humans already get their news from various “feeds” of daily information, controlled by recommendation algorithms. Even worse, people’s ability to seek out new information and find answers to questions is also to some extent controlled by recommendation algorithms: Google Search, for example. There’s a lot of talk these days about fake news and conspiracy theories, but I’m pretty sure that selective/biased reporting is a much bigger problem.
Chatbot: Thanks to recent advancements in language modeling (e.g. GPT-3) chatbots might become actually good. It’s easy to imagine chatbots with millions of daily users continually optimized to maximize user engagement—see e.g. Xiaoice. The systems could then be retrained to persuade people of things, e.g. that certain conspiracy theories are false, that certain governments are good, that certain ideologies are true. Perhaps no one would do this, but I’m not optimistic.
Coach: A cross between a chatbot, a feeder, and an analyzer. It doesn’t talk to the target on its own, but you give it access to the conversation history and everything you know about the target and it coaches you on how to persuade them of whatever it is you want to persuade them of. [EDIT 5/21/2021: For a real-world example (and worrying precedent!) of this, see the NYT’s getting-people-to-vaccinate persuasion tool, and this related research]
Drugs: There are rumors of drugs that make people more suggestible, like scopolamine. Even if these rumors are false, it’s not hard to imagine new drugs being invented that have a similar effect, at least to some extent. (Alcohol, for example, seems to lower inhibitions. Other drugs make people more creative, etc.) Perhaps these drugs by themselves would not be enough, but they could work in combination with a Coach or Chatbot. (You meet the target for dinner, and slip some drug into their drink. It is mild enough that they don’t notice anything, but it primes them to be more susceptible to the ask you’ve been coached to make.)
Imperius Curse: These are a kind of adversarial example that gets the target to agree to an ask (or even switch sides in a conflict!), or adopt a belief (or even an entire ideology!). Presumably they wouldn’t work against humans, but they might work against AIs, especially if meme theory applies to AIs as it does to humans. The reason this would work better against AIs than against humans is that you can steal a copy of the AI and then use massive amounts of compute to experiment on it, finding exactly the sequence of inputs that maximizes the probability that it’ll do what you want.
We might get powerful persuasion tools prior to AGI
The first thing to point out is that many of these kinds of persuasion tools already exist in some form or another. And they’ve been getting better over the years, as technology advances. Defenses against them have been getting better too. It’s unclear whether the balance has shifted to favor these tools, or their defenses, over time. However, I think we have reason to think that the balance may shift heavily in favor of persuasion tools, prior to the advent of other kinds of transformative AI. The main reason is that progress in persuasion tools is connected to progress in Big Data and AI, and we are currently living through a period of rapid progress in those things; that progress will probably continue to be rapid (and possibly accelerate) prior to AGI.
Beyond that general point, here are some more specific reasons to think persuasion tools may become relatively more powerful:
Substantial prior: Shifts in the balance between things happen all the time. For example, the balance between weapons and armor has oscillated at least a few times over the centuries. Arguably persuasion tools got relatively more powerful with the invention of the printing press, and again with radio, and now again with the internet and Big Data. Some have suggested that the printing press helped cause religious wars in Europe, and that radio assisted the violent totalitarian ideologies of the early twentieth century.
Consistent with recent evidence: A shift in this direction is consistent with the societal changes we’ve seen in recent years. The internet has brought with it many inventions that improve collective epistemology, e.g. Google Search, Wikipedia, and the ability of communities to create their own forums. Yet on balance it seems to me that collective epistemology has deteriorated in the last decade or so.
Lots of room for growth: I’d guess that there is lots of “room for growth” in persuasive ability. There are many kinds of persuasion strategy that are tricky to use successfully. Like a complex engine design compared to a simple one, these strategies might work well, but only if you have enough data and time to refine them and find the specific version that works at all, on your specific target. Humans never have that data and time, but AI+Big Data does, since it has access to millions of conversations with similar targets. Persuasion tools will be able to say things like “In 90% of cases where targets in this specific demographic are prompted to consider and then reject the simulation argument, and then challenged to justify their prejudice against machine consciousness, the target gets flustered and confused. Then, if we make empathetic noises and change the subject again, 50% of the time the subject subconsciously changes their mind so that when next week we present our argument for machine rights they go along with it, compared to 10% baseline probability.”
Plausibly pre-AGI: Persuasion is not an AGI-complete problem. Most of the types of persuasion tools mentioned above already exist, in weak form, and there’s no reason to think they can’t gradually get better well before AGI. So even if they won’t improve much in the near future, plausibly they’ll improve a lot by the time things get really intense.
Language modelling progress: Persuasion tools seem to be especially benefitted by progress in language modelling, and language modelling seems to be making even more progress than the rest of AI these days.
More things can be measured: Thanks to said progress, we now have the ability to cheaply measure nuanced things like user ideology, enabling us to train systems towards those objectives.
Chatbots & Coaches: Thanks to said progress, we might see some halfway-decent chatbots prior to AGI. Thus an entire category of persuasion tool that hasn’t existed before might come to exist in the future. Chatbots too stupid to make good conversation partners might still make good coaches, by helping the user predict the target’s reactions and suggesting possible things to say.
Minor improvements still important: Persuasion doesn’t have to be perfect to radically change the world. An analyzer that helps your memes have a 10% higher replication rate is a big deal; a coach that makes your asks 30% more likely to succeed is a big deal.
Faster feedback: One way defenses against persuasion tools have strengthened is that people have grown wise to them. However, the sorts of persuasion tools I’m talking about seem to have significantly faster feedback loops than the propagandists of old; they can learn constantly, from the entire population, whereas past propagandists (if they were learning at all, as opposed to evolving) relied on noisier, more delayed signals.
Overhang: Finding persuasion drugs is costly, immoral, and not guaranteed to succeed. Perhaps this explains why it hasn’t been attempted outside a few cases like MKULTRA. But as technology advances, the cost goes down and the probability of success goes up, making it more likely that someone will attempt it, and giving them an “overhang” with which to achieve rapid progress if they do. (I hear that there are now multiple startups built around using AI for drug discovery, by the way.) A similar argument might hold for persuasion tools more generally: We might be in a “persuasion tool overhang” in which they have not been developed for reasons of ethics and risk, but at some point the cost and riskiness drop low enough that someone builds them anyway, and that triggers a cascade of more and richer people building better and better versions.
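The “minor improvements still important” point above can be made concrete with a toy exponential-growth model of meme spread. All the numbers here are hypothetical, chosen only for illustration:

```python
# Toy model: a meme spreads exponentially, reaching r**n hosts after
# n generations of sharing. Even a modest 10% edge in replication
# rate compounds across generations. Hypothetical numbers throughout.

def reach(r: float, generations: int) -> float:
    """Hosts reached after the given number of generations, starting from one host."""
    return r ** generations

baseline = reach(1.5, 20)          # meme with replication factor 1.5 per generation
boosted = reach(1.5 * 1.10, 20)    # same meme, 10% higher replication rate

print(round(boosted / baseline, 1))  # 6.7 -- nearly 7x the reach
```

Under these toy assumptions, the analyzer that buys you a 10% edge multiplies your eventual audience roughly sevenfold, which is why small tool-driven improvements can still reshape the memetic landscape.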
Speculation about effects of powerful persuasion tools
Here are some hasty speculations, beginning with the most important one:
Ideologies & the biosphere analogy:
The world is, and has been for centuries, a memetic warzone. The main factions in the war are ideologies, broadly construed. It seems likely to me that some of these ideologies will use persuasion tools—both on their hosts, to fortify them against rival ideologies, and on others, to spread the ideology.
Consider the memetic ecosystem—all the memes replicating and evolving across the planet. Like the biological ecosystem, some memes are adapted to, and confined to, particular niches, while other memes are widespread. Some memes are in the process of gradually going extinct, while others are expanding their territory. Many exist in some sort of equilibrium, at least for now, until the climate changes. What will be the effect of persuasion tools on the memetic ecosystem?
For ideologies at least, the effects seem straightforward: The ideologies will become stronger, harder to eradicate from hosts and better at spreading to new hosts. If all ideologies got access to equally powerful persuasion tools, perhaps the overall balance of power across the ecosystem would not change, but realistically the tools will be unevenly distributed. The likely result is a rapid transition to a world with fewer, more powerful ideologies. They might be more internally unified, as well, having fewer spin-offs and schisms due to the centralized control and standardization imposed by the persuasion tools. An additional force pushing in this direction is that ideologies that are bigger are likely to have more money and data with which to make better persuasion tools, and the tools themselves will get better the more they are used.
Recall the quotes I led with:
… At that point people won’t be able to participate in any online (or offline for that matter) discussions without risking their object-level values being hijacked.
--Wei Dai
What if most people already live in that world? A world in which taking arguments at face value is not a capacity-enhancing tool, but a security vulnerability? Without trusted filters, would they not dismiss highfalutin arguments out of hand … ?
--Benquo
1. AI-powered memetic warfare makes all humans effectively insane.
--Wei Dai, listing nonstandard AI doom scenarios
I think the case can be made that we already live in this world to some extent, and have for millennia. But if persuasion tools get better relative to countermeasures, the world will be more like this.
This seems to me to be an existential risk factor. It’s also a risk factor for lots of other things, for that matter. Ideological strife can get pretty nasty (e.g. religious wars, gulags, genocides, totalitarianism), and even when it doesn’t, it still often gums things up (e.g. suppression of science, zero-sum mentality preventing win-win solutions, virtue-signalling death spirals, refusal to compromise). This is bad enough already, but it’s doubly bad when it comes at a moment in history where big new collective action problems need to be recognized and solved.
Obvious uses: Advertising, scams, propaganda by authoritarian regimes, etc. will improve. This means more money and power for those who control the persuasion tools. Another important implication may be that democracies end up at a major disadvantage on the world stage compared to totalitarian autocracies: scissor statements and other divisiveness-sowing tactics may not technically count as persuasion tools, but they would probably grow more powerful in tandem, and open societies are more exposed to them.
Will the truth rise to the top: Optimistically, one might hope that widespread use of more powerful persuasion tools will be a good thing, because it might create an environment in which the truth “rises to the top” more easily. For example, if every side of a debate has access to powerful argument-making software, maybe the side that wins is more likely to be the side that’s actually correct. I think this is a possibility but I do not think it is probable. After all, it doesn’t seem to be what’s happened in the last two decades or so of widespread internet use, big data, AI, etc. Perhaps, however, we can make it true for some domains at least, by setting the rules of the debate.
Data hoarding: A community’s data (chat logs, email threads, demographics, etc.) may become even more valuable. It can be used by the community to optimize their inward-targeted persuasion, improving group loyalty and cohesion. It can be used against the community if someone else gets access to it. This goes for individuals as well as communities.
Chatbot social hacking viruses: Social hacking is surprisingly effective. The classic example is calling someone pretending to be someone else and getting them to do something or reveal sensitive information. Phishing is like this, only much cheaper (because automated) and much less effective. I can imagine a virus that is close to as good as a real human at social hacking while being much cheaper and able to scale rapidly and indefinitely as it acquires more compute and data. In fact, a virus like this could be made with GPT-3 right now, using prompt programming and “mothership” servers to run the model. (The prompts would evolve to match the local environment being hacked.) Whether GPT-3 is smart enough for it to be effective remains to be seen.
Implications
I doubt that persuasion tools will improve discontinuously, and I doubt that they’ll improve massively. But minor and gradual improvements matter too.
Of course, influence over the future might not disappear all on one day; maybe there’ll be a gradual loss of control over several years. For that matter, maybe this gradual loss of control began years ago and continues now…
I think this is potentially (5% credence) the new Cause X, more important than (traditional) AI alignment even. It probably isn’t. But I think someone should look into it at least, more thoroughly than I have.
To be clear, I don’t think it’s likely that we can do much to prevent this stuff from happening. There are already lots of people raising the alarm about filter bubbles, recommendation algorithms, etc. so maybe it’s not super neglected and maybe our influence over it is small. However, at the very least, it’s important for us to know how likely it is to happen, and when, because it helps us prepare. For example, if we think that collective epistemology will have deteriorated significantly by the time crazy AI stuff starts happening, that influences what sorts of AI policy strategies we pursue.
Note that if you disagree with me about the extreme importance of AI alignment, or if you think AI timelines are longer than mine, or if you think fast takeoff is less likely than I do, you should all else equal be more enthusiastic about investigating persuasion tools than I am.
Thanks to Katja Grace, Emery Cooper, Richard Ngo, and Ben Goldhaber for feedback on a draft. This research was conducted at the Center on Long-Term Risk and the Polaris Research Institute.
Related previous work:
Stuff I’d read if I were investigating this in more depth:
Not Born Yesterday
The stuff here and here
EDIT: This ultrashort sci-fi story by Jack Clark illustrates some of the ideas in this post:
The Narrative Control Department
[A beautiful house in South West London, 2030]
“General, we’re seeing an uptick in memes that contradict our official messaging around Rule 470.” “What do you suggest we do?”
“Start a conflict. At least three sides. Make sure no one side wins.”
“At once, General.”
And with that, the machines spun up – literally. They turned on new computers and their fans revved up. People with tattoos of skeletons at keyboards high-fived each other. The servers warmed up and started to churn out their fake text messages and synthetic memes, to be handed off to the ‘insertion team’ who would pass the data into a few thousand sock puppet accounts, which would start the fight.
Hours later, the General asked for a report.
“We’ve detected a meaningful rise in inter-faction conflict and we’ve successfully moved the discussion from Rule 470 to a parallel argument about the larger rulemaking process.”
“Excellent. And what about our rivals?”
“We’ve detected a few Russian and Chinese account networks, but they’re staying quiet for now. If they’re mentioning anything at all, it’s in line with our narrative. They’re saving the IDs for another day, I think.”
That night, the General got home around 8pm, and at the dinner table his teenage girls talked about their day.
“Do you know how these laws get made?” the older teenager said. “It’s crazy. I was reading about it online after the 470 blowup. I just don’t know if I trust it.”
“Trust the laws that gave Dad his job? I don’t think so!” said the other teenager.
They laughed, as did the General’s wife. The General stared at the peas on his plate and stuck his fork into the middle of them, scattering so many little green spheres around his plate.
EDIT: Finally, if you haven’t yet, you should read this report of a replication of the AI Box Experiment.
The problem outlined in this post results from two major concerns on LessWrong: risks from advanced AI systems and irrationality due to parasitic memes.
It presents the problem of persuasion tools as continuous with the problems humanity has had with virulent ideologies and sticky memes, exacerbated by the increasing capability of narrowly intelligent machine learning systems to exploit biases in human thought. It provides (but doesn’t explore) two examples from history to support its hypothesis: the printing press as a partial cause of the Thirty Years’ War, and the radio as a partial cause of 20th-century totalitarianism.
Those two concerns especially reminded me of Is Clickbait Destroying Our General Intelligence? (Eliezer Yudkowsky, 2018), which could be situated in the same series of ideas.
Kokotajlo also briefly considers the hypothesis that epistemic conditions might have become better through the internet, but rejects it (for reasons that are not spelled out, though the answers to Have epistemic conditions always been this bad? (Wei Dai, 2021) might be illuminating). Survivorship bias probably plays a large role here: epistemically unsound information is less likely to survive long-term trials for truth, especially in an environment where memes on the less truth-oriented side of the spectrum face harsher competition than memes on the more truth-oriented side.
This post was written a year ago, and didn’t make any concrete predictions (for a vignette of the future by the author, see What 2026 looks like (Daniel’s Median Future) (Daniel Kokotajlo, 2021)). My personal implied predictions under this worldview are something like this:
A large number (>100 million) of people in the Western world (USA & EU) will interact with chatbots on a regular basis (e.g. more than once a week).
I think this isn’t yet the case: I’ve encountered chatbots mainly in the context of customer service, and don’t know anyone personally who has used a chatbot for entertainment for more than an afternoon (Western Europe). If we count automated personal assistants such as Alexa or Siri, this might be true.
It is revealed that a government spent a substantial amount of money (>$1B) on automating propaganda creation.
As far as I know, there hasn’t been any reveal of such a large-scale automated propaganda campaign (the Wikipedia pages on propaganda in China and in the US mention no such operations).
Online ideological conflicts spill over into the real world more often.
As I haven’t been following the news closely, I don’t have many examples here, but the 2020–21 United States election protests come to mind.
The internet becomes more fractured, into discrete walled gardens (e.g. into a Chinese Internet, US Blue Internet, and US Red Internet).
This seems to become more and more true, with sites such as Gab and the Fediverse gaining in popularity. However, it doesn’t seem like the US Red Internet has the technological capabilities to automate propaganda or build a complete walled garden, to the extent that the US Blue Internet or the Chinese internet do.
I found the text quite relevant both to thinking about possible alternative stories about the way in which AI could go wrong, and also to my personal life.
In the domain of AI safety, I became more convinced of the importance of aligning recommender systems to human values (also mentioned in the post): they may pose larger risks than commonly assumed, and they provide good ground for experimentation with alignment techniques. Whether aligning recommender systems is more important than aligning large language models seems like an important crux here: are the short- and long-term risks from recommender systems (i.e. reinforcement learners) larger than the risks from large language models? Which route appears more fruitful when trying to align more generally capable systems? As far as I can see, the alignment community is more interested in attempts to align large language models than recommender systems, probably due to recent progress in that area and because it’s easier to test alignment in language models.
The scenarios in which AI-powered memetic warfare significantly harms humanity can also be tied into research on the malicious use of AI, e.g. The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Brundage et al. 2018). Policy tools from diplomacy with regard to biological, chemical and nuclear warfare could be applied to memetic and psychological warfare.
The text explicitly positions the dangers of persuasion tools as a risk factor, but, more speculatively, they might also pose an existential risk in themselves, in two different scenarios:
If humans are very easy to manipulate by AI systems that are narrowly superhuman in the domain of human psychology, a scenario similar to Evolving to Extinction (Eliezer Yudkowsky, 2007) might occur: nearly everybody goes effectively insane at approximately the same time, resulting in the collapse of civilization.
Humans might become insane enough that further progress along relevant axes is halted, but not insane enough that civilization collapses. We get stuck oscillating around some technological level, until another existential catastrophe like nuclear war and resulting nuclear winter finishes us off.
On the personal side: after being fooled by people using GPT-3 to generate tweets, seeing at least one instance of someone asking a commenter for the MD5 hashsum of a string to verify that the commenter was human (and the commenter failing that challenge), and observing the increasingly negative effects of internet usage on my attention span, I decided to separate my place for sleeping & eating from the place where I use the internet, with a ~10 minute commute between the two. I also decided to pay less attention to news stories/reddit/twitter, especially from sources affiliated with large governments, and I downloaded my favourite websites.
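The MD5 challenge mentioned above works because the digest is trivial to produce with tooling but effectively impossible to compute from memory, whether for a human or for a bare language model. A minimal sketch in Python:

```python
import hashlib

def md5_challenge(s: str) -> str:
    """Hex MD5 digest of a string -- instant with a computer at hand,
    hopeless to derive mentally or to pattern-match from training data."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

print(md5_challenge("hello"))  # 5d41402abc4b2a76b9719d911017c592
```

A commenter with a real shell can answer in seconds; a language model completing text without tool access will almost always produce a wrong digest.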
This post was relevant to my thoughts about alternative AI risk scenarios as well as drastic personal decisions, and I expect to give it a 1 or (more likely) a 4 in the final vote.
Thanks!
I really really didn’t mean to make or imply those predictions! That’s all much too soon, it’s only been a year! For a better sense of what my predictions were/are, see this vignette which was written half a year after Persuasion Tools and describes my “median” future.
Other than that, I agree with everything you say in this review. (And the predictions you imagine it making are reasonable, I just wouldn’t have said they’d happen within two years! I’m thinking more like five years.) I’m also unsure whether it’ll ever rise to $1B in spending from a single government, or whether chatbots will ever become a big deal (the best kinds of persuasion tools now, and possibly always, are non-chatbots).
Oh dear, I didn’t want to imply that those were your predictions! It was merely an exercise for myself to make the ideas in the post more concrete. I’ll revise my comment to make that clearer. Also apologies for not linking your median future vignette, that was an oversight on my part :-)
I’ll also update my review by making the predictions less conjunctive, perhaps you won’t endorse them anymore afterwards.
oh ok, thanks! Sorry if I misinterpreted you.