FYI your ‘octopus paper’ link is to Stochastic Parrots; it should be this link.
Though at least in the Quanta piece, Bender doesn’t acknowledge any update of that sort.
I’ve seen other quotes from Bender & relevant coauthors that suggest they haven’t really updated, which I find fascinating. I’d love to have the opportunity to talk with them about it and understand better how their views have remained consistent despite the evidence that’s emerged since the papers were published.
So the octopus paper argument must be wrong somewhere.
It makes a very intuitively compelling argument! I think that, as with many confusions about the Chinese Room, the problem is that our intuitions fail at the relevant scale. Given an Internet's worth of discussion of bears and sticks and weapons, the hyper-intelligent octopus's model of those things is rich enough for it to give advice that would work in the real world, even if it perhaps couldn't recognize a bear by sight. For example, it would know that sticks have a certain distribution of mass, and are the sorts of things that could be bound together by rope (which it knows is available because of the coconut catapult), and that the combined sticks might have enough mass to serve as a weapon, and what amounts of force would be harmful to a bear, etc. But it's very hard to appreciate just how rich those models can be when our intuitions are primed by a description of two people casually exchanging messages.
Perhaps relevant, she famously doesn’t like the arXiv, so maybe on principle she’s disregarding all evidence not from “real publications.”