I am curious how well LLMs would do at convincing people of any argument in general. Do they do better at convincing people that AI is no big risk? That is a practical concern, since it is the kind of thing that would be done as a prelude to takeover. But even if the question were purely intellectual, the comparison would frame your results in a way that seems more meaningful. If LLMs are worse at convincing people that AI is a risk than at convincing them about, say, climate change or another pandemic, that would be an interesting result.
I believe that as this technology gets better, it will become more persuasive regardless of truth, and that could seriously poison discourse. It would really amplify tribalism to have an AI sycophant telling you how wrong your enemies are, no matter what you believe. Not that I am accusing you of poisoning the well, but this seems very close to the concern voiced in this recent post.
We tried to go pretty hard on making sure it only makes correct and valid arguments and isn’t misleading.