Steelmanning might be particularly useful in cases where we have reason to believe that those who have engaged most with the arguments are biased toward one side of the debate.
As described in "But Have They Engaged with the Arguments?", one reason many who dismiss AI risk haven't engaged much with the arguments may be a selection effect: people engage more when the first arguments they hear seem true. It might therefore be useful to steelman arguments against AI risk made by generally reasonable people, even when those arguments seem off due to a lack of engagement with existing counterarguments, in order to extract potentially relevant insights (though perhaps an alternative is funding reasonable skeptics to engage with the arguments much more deeply?).