Separately from my specific comment on Go, I think that “people are misinformed in one direction, so I will say something exaggerated and false in the other direction to make them snap out of their misconception” is not a great strategy. They might notice that the thing you said is not true, ask a question about it, and then you need to backtrack, and they get confirmation of their belief that these AI people always exaggerate everything.
I once saw an AI safety advocate talking to a skeptical person who was under the impression that AIs still can’t piece together three logical steps. At some point the advocate said the usual line about the newest AIs having reached “PhD-level capabilities,” and the audience immediately called them out on it. They then had to apologize and clarify that of course they only meant PhD-level on specific narrow tests, and they never got to correct any of the audience’s misconceptions.
Also, regardless of strategic considerations, I think saying false things is bad.
Yep. I agree with this. As I wrote, I think it’s a key skill to manage to hold the heart of the issue in a way that is clear and raw, while also not going overboard. There’s a milquetoast failure mode and an exaggeration failure mode and it’s important to dodge both. I think the quoted text fails to thread the needle, and was agreeing with Ryan (and you) on that.