How worried should we be about personalization maximized for persuasion or engagement?
AI persuasion is in my top 5 concerns at the moment. In my experience, people who don’t immediately see why AI persuasion matters underestimate how much power even a modest persuasive edge confers, and how much persuasion is normally bottlenecked by time and scale of reach.
I happen to agree that persuasion is a huge issue for AI, but I also don’t see persuasion in the same way that some of you might.
I think the biggest risk for AI persuasion in 2025 is a nefarious actor using an AI model to help persuade a person or group of people; think classic agitprop, or a state actor trying to influence diplomacy. Persuasion of this sort is a tale as old as civilization itself.
The issue I see down the line is when the human hand guiding the AI is no longer necessary, and the agentic model (and eventually AGI) has its own goals, values, and desires. Both types of persuasion are bad, but the second is a medium-to-long-term issue, while AI persuasion as a means to a human's end is a front-burner issue right now.
I pretty much agree with this.