I’ve given responses before where I go into detail about how I disagree with some public presentation on AI; the primary example is this one from January 2017, which Yvain also responded to. Generally I do this after sending the draft to the person in question, to give them a chance to clarify or correct misunderstandings (and to be cooperative instead of blindsiding them).
I generally think it’s counterproductive to ‘partially engage’ or to be dismissive; for example, one consequence of XiXiDu’s interviews with AI experts was that some of them (who received mostly dismissive remarks in the LW comments) came away with the impression that people interested in AI risk were jerks who aren’t really worth engaging with. Likewise, I might think someone is confused if they think climate change is more important than AI safety, but I don’t think it’s useful to just tell them they’re confused, or to off-handedly remark that “of course AI safety is more important,” since the underlying considerations (like the difference between catastrophic risks and existential risks) are actually non-obvious.