NLP Position Paper: When Combatting Hype, Proceed with Caution

Linkpost for https://cims.nyu.edu/~sbowman/bowman2021hype.pdf. To appear on arXiv shortly.

I’m sharing a position paper I put together as an attempt to introduce general NLP researchers to AI risk concerns. From a few discussions at *ACL conferences, it seems like a large majority of active researchers either aren’t aware of the arguments at all, or don’t see any connection between them and NLP and large language model work.

The paper makes a slightly odd multi-step argument to try to connect to active debates in the field:

  • It’s become extremely common in NLP papers and talks to claim or imply that NNs are too brittle to use, that they aren’t doing anything that could plausibly resemble language understanding, and that this is a deep feature of NNs that we don’t know how to fix. These claims sometimes come with evidence, but it’s often weak evidence, like citations to failures in old systems that we’ve since improved on significantly. Weirdly, this even happens in papers that themselves report positive results with NNs.

  • This seems to be coming from concerns about real-world harms: current systems are pretty biased, and we don’t have great methods for dealing with that, so there’s a widely shared feeling that we shouldn’t be deploying big NNs nearly as often as we are. The reasoning seems to go: if we downplay the effectiveness of this technology, that will discourage its deployment.

  • But is that actually the right way to minimize the risk of harms? We should expect the impacts of these technologies to grow dramatically as they get better (this is where the basic AI risk arguments come in), and we’ll need to be prepared for those impacts. Downplaying the progress we’re making, both to each other and to outside stakeholders, limits our ability to foresee potentially impactful progress or to prepare for it.

I’ll be submitting this to ACL in a month. Comments/criticism welcome, here or privately (bowman@nyu.edu).