First, an IQ of 100 is only useful for ruling out the easy-to-persuade (IQ <80). There are likely other correlates of “easy to persuade” that depend on how the AI is doing the persuading.
Second, super-persuasion is about scalability and cost. Bribery doesn’t scale because actors have limited amounts of money; <$100 in inference plus amortised training should be able to persuade a substantial fraction of people.
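To make the scaling argument concrete, here is a back-of-envelope comparison. All of the specific figures (bribe size, training cost) are illustrative assumptions of mine; only the <$100-per-person inference figure comes from the comment above:

```python
# Back-of-envelope: bribery cost scales linearly with the number of targets,
# while AI persuasion cost is dominated by a one-off training cost that
# amortises across everyone persuaded. All dollar figures are assumptions.

BRIBE_PER_PERSON = 10_000        # assumed: a bribe large enough to matter
INFERENCE_PER_PERSON = 100       # upper bound from the comment (<$100)
TRAINING_COST = 50_000_000       # assumed one-off training run, amortised

def bribery_cost(n_targets: int) -> int:
    return n_targets * BRIBE_PER_PERSON

def ai_persuasion_cost(n_targets: int) -> int:
    return TRAINING_COST + n_targets * INFERENCE_PER_PERSON

for n in (10_000, 1_000_000):
    print(f"{n:>9} targets: bribery ${bribery_cost(n):,} "
          f"vs AI ${ai_persuasion_cost(n):,}")
```

Under these assumptions the crossover is stark: at a million targets, bribery costs $10B while AI persuasion costs ~$150M, and the per-person cost of the AI keeps falling as the training cost amortises further.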
Achieving this requires a scalable “training environment” that generates a non-Goodhartable reward signal. An AI trained to persuade a large population of real users (e.g. for affiliate marketing purposes) would be a super-persuader. Once a large company decides to do this at scale, the results will be much better than anything a hobbyist can do. Synthetic evaluation environments (e.g. LLM simulations of users) can help too, but they are limited by being exploitable in ways that don’t generalise to humans.
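A minimal sketch of why real-user feedback is the non-Goodhartable signal: a toy epsilon-greedy bandit that selects among persuasion strategies, rewarded only by a ground-truth outcome (a real conversion) rather than a proxy score. The strategy names and conversion rates are hypothetical stand-ins for a real user population:

```python
import random

# Toy sketch (hypothetical): learn which persuasion strategy converts best.
# The reward is the actual outcome, so unlike an LLM-judge proxy it cannot
# be exploited in ways that fail to generalise to real humans.

STRATEGIES = ["scarcity", "social_proof", "authority"]
# Hidden true conversion rates, standing in for the real-user population.
TRUE_RATES = {"scarcity": 0.02, "social_proof": 0.05, "authority": 0.03}

def run(n_users: int = 50_000, eps: float = 0.1, seed: int = 0) -> str:
    rng = random.Random(seed)
    wins = {s: 0 for s in STRATEGIES}
    pulls = {s: 0 for s in STRATEGIES}
    for _ in range(n_users):
        if rng.random() < eps:                      # explore
            s = rng.choice(STRATEGIES)
        else:                                        # exploit best estimate
            s = max(STRATEGIES,
                    key=lambda k: wins[k] / pulls[k] if pulls[k] else 0.0)
        pulls[s] += 1
        wins[s] += rng.random() < TRUE_RATES[s]      # reward = real conversion
    return max(STRATEGIES,
               key=lambda k: wins[k] / pulls[k] if pulls[k] else 0.0)

print(run())  # converges on the genuinely best-performing strategy
```

This is also where the scale advantage bites: the company with millions of real users per day gets a clean, un-gameable gradient that no hobbyist environment can match.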
In contrast to hacking computers, there are no regulations against social engineering. Some company will develop these capabilities, which can then be used for nefarious purposes, with only the usual safeguards, like whistleblowers.
Skill issue.