Sorry, but you’re overthinking what’s required. Reliably applying existing techniques is more than enough to hack the minds of large groups of people; no complex new research is needed.
Here is a concrete example.
First, to get someone’s attention, just make them feel listened to. ELIZA could already do this in the 1960s, and ChatGPT is far better. The result is what therapists call transference, and it makes the person want to please the AI.
Now the AI can use the same basic toolkit mastered by demagogues throughout history. Use simple, repetitive language to hit emotional buttons over and over. Get followers to form a social group. Switch positions every so often; followers who aren’t paying close attention will have the painful experience of being attacked by their friends for holding yesterday’s line, which forces everyone to pay closer attention.
All of this is known and effective. What AI adds is the ability to apply individualized techniques, at scale, to pull people into many target groups. Once they are in those groups, it can use the demagogue’s techniques to erase differences and align them into ever larger groups.
The result is that, as Sam Altman predicted, LLMs will prove superhumanly persuasive. They can beat the demagogues at their own game by seeding mass-persuasion techniques with individualized attention at scale.
Do you think this isn’t going to happen? Social media already did much of this at scale, accidentally. Now it is just a question of weaponizing something like TikTok.
The effect isn’t large, but you’d lose that bet. See “Nonreplicable publications are cited more than replicable ones.”