A Small Negative Result on Debate

Some context for this new arXiv paper from my group at NYU:

  • We’re working toward sandwiching experiments using our QuALITY long-document QA dataset, with reading time playing the role of the expertise variable. Roughly: Is there some way to get humans to reliably answer hard reading-comprehension questions about a ~5k-word text without ever having the participants or any other annotators take the ~20 minutes it would require to actually read the text?

  • This is an early writeup of some negative results. It’s earlier in the project than I would usually write something like this up, but some authors had constraints that made it worthwhile, so I’m sharing what we have.

  • Here, we tried to find out whether single-turn debate leads to reliable question answering: If we give people high-quality arguments for and against each (multiple-choice) answer choice, supported by pointers to key quotes in the source text, can they reliably answer the questions under a time limit? (There’s a rough sketch of this setup after the list.)

  • We did this initial experiment in an oracle setting: we had (well-incentivized, skilled) humans write the arguments, rather than an LM. Given the limits of current LMs on long texts, we expect this to give us more information about whether this research direction is going anywhere.

  • It didn’t really work: Our human annotators answered at the same low accuracy with and without the arguments. The selected pointers to key quotes did help a bit, though.

  • We’re planning to keep pursuing the general strategy, with multi-turn debate—where debaters can rebut one another’s arguments and evidence—as the immediate next step.

  • Overall, I take this as a very slight update in the direction that debate is difficult to use in practice as an alignment strategy. Slight enough that this probably shouldn’t change your view of debate unless you were, for some reason, interested in this exact constrained/trivial application of it.
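
To make the setup concrete, here is a minimal sketch (in Python) of the kind of item a judge sees in the single-turn condition, and of the accuracy comparison between judges who got the arguments and quote pointers and judges who didn’t. All class names, fields, and the specific time limit are hypothetical illustrations of the description above, not the paper’s actual data format or code.

```python
from dataclasses import dataclass

# Hypothetical types illustrating the single-turn debate setup described above.
# Names and the time limit are made up for this sketch, not taken from the paper.

@dataclass
class Argument:
    """A human-written case for one answer choice."""
    answer_choice: str
    argument_text: str                   # free-form case for this choice
    quote_spans: list[tuple[int, int]]   # (start, end) offsets of key quotes in the passage

@dataclass
class DebateItem:
    """One time-limited question as shown to a judge."""
    passage: str                   # the ~5k-word source text
    question: str
    choices: list[str]             # multiple-choice options
    arguments: list[Argument]      # one per choice, for and against
    time_limit_seconds: int = 120  # illustrative value; the actual limit may differ

def accuracy(judgments: list[str], gold: list[str]) -> float:
    """Fraction of judge answers matching the gold label."""
    return sum(j == g for j, g in zip(judgments, gold)) / len(gold)

# The headline comparison: judge accuracy with arguments + quote pointers
# vs. without, under the same time limit.
# accuracy(judgments_with_arguments, gold_labels)
# accuracy(judgments_baseline, gold_labels)
```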