‘Stochastic parrots’ 2020 actually does make many falsifiable claims. Like, the original stochastic parrots paper even included a number of specific sample prompts that they claimed LLMs could never handle.
The Bender et al paper? “On the Dangers of Stochastic Parrots”? Other sources like Wikipedia cite that paper as the origin of the term.
I’ll confess I skipped parts of it (eg the section on environmental costs) when rereading it before posting the above, but that paper doesn’t contain ‘octopus’ or ‘game’ or ‘transcript’, and I’m not seeing claims about specific prompts.
Oh, no, I see, I think you’re referring to Bender and Koller, “Climbing Toward NLU”? I haven’t read that one, I’ll skim it now.

OK, yeah, Bender & Koller is much more bullet-biting, up to and including denying that any understanding happens anywhere in a Chinese Room. In particular they argue that completing “three plus five equals” is beyond the ability of any pure LM, which is pretty wince-inducing in retrospect.
I really appreciate that in that case they did make falsifiable claims; I wonder whether either author has at any point acknowledged that they were falsified. [Update: Bender appears to have still held the same positions as of September 23, judging by the slides from this talk.]
AFAICT, the only falsified claim in the paper is the “three plus five equals” claim you mentioned. It’s in this appendix, and it’s not clear to me what they mean by “pure LM”. (Surely they’d agree that a model can memorize this?)
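For what it’s worth, the memorization point is easy to demonstrate concretely. Here’s a toy sketch (entirely my own illustration, not from the paper): a bare-bones trigram language model trained on a tiny corpus that happens to contain the sentence will complete “three plus five equals” by pure memorization, with no understanding of arithmetic involved.

```python
# Toy trigram language model: completes a prompt purely by memorizing
# which word followed each two-word context in its training corpus.
from collections import Counter, defaultdict

# Hypothetical tiny training corpus (my own example).
corpus = (
    "three plus five equals eight . "
    "two plus two equals four . "
    "three plus five equals eight ."
).split()

# Count which word follows each bigram context.
counts = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    counts[(a, b)][c] += 1

def complete(prompt: str) -> str:
    """Greedily predict the next word from the last two prompt tokens."""
    context = tuple(prompt.split()[-2:])
    return counts[context].most_common(1)[0][0]

print(complete("three plus five equals"))  # -> eight
```

The model “knows” nothing about addition; it only knows that “eight” followed “five equals” in its training data, which is exactly the sense in which a pure LM can trivially produce this completion.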
The other claims are relatively weak and not falsified. See here.