As a Debater’s capabilities increase, I expect it to become more able to convince a human of both true and false propositions, particularly when the propositions in question concern complex (real-world) matters. And for Debate to be useful, I think it would indeed have to handle very complex propositions like “running so-and-so AI software would be unsafe”. For such a proposition P, in the limit of Debater capabilities, I think a Debater would have roughly as easy a time convincing a human of ¬P as of P. Hence: as Debater capabilities increase, if the judge is human and the questions being debated are complex, I’d tentatively expect the Debaters’ arguments to be determined mostly by something other than “what is true”.
I.e., the approximate opposite of
“in the limit of argumentative prowess, the optimal debate strategy converges to making valid arguments for true conclusions.”