As already discussed, it’s a sensible question and the straightforward answer is ‘11 hours’, set by an asylum seeker who was prosecuted for it. He walked, across, the English Channel, in a certain amount of time. So the question has an answer, just as it would if there were a giant bridge over it and he’d walked across that instead of using a tunnel under it. (The biking record, if you were curious about other dry foot-powered modes, is apparently 55 minutes, and was legal. I didn’t come across any further modalities like horseback or pogo stick, but given modern athletics, I don’t rule them out, even if I do rule out the records from things like crossings using foot-powered paddleboats.)
If it gave a different answer, it would merely be wrong, and that’s not what Hofstadter & Bender are attacking. They just came up with a bad question for what they wanted to do. (Similar to the bear attack question.)
Anyway, I put in OP’s transcript, added that as a question, and ran it with best-of=20 / temperature=1 (BO=20/temp=1):
It’s 21 miles, and the record is 14 hours, 45 minutes, set by Polly Glynn in 1994.
“Polly Glynn”, if you were wondering like I was, is not a made-up name but a real person: she is a British lawyer known for defending asylum seekers & migrants. Which is quite a coincidence. (I did some googling, and while I couldn’t find any direct evidence that her firm defended Abdel Rahman Haroun after he walked across the Channel, neither did I find who did defend him, nor any evidence that her firm didn’t defend that specific asylum seeker.) This makes me wonder if, like the Harding grandparent error, GPT-3 is in the vicinity of the answer and knows, to some degree, that asylum seekers were involved, but doesn’t quite know it because it’s mentioned in too few places, and fails by grabbing a name which is loosely related to ‘legal incidents involving UK asylum seekers’. (The name “Abdel Rahman Haroun”, or the crossing taking 11 hours, is mentioned online a lot less than the incident itself.)
The big question about GPT-3 is not whether it can reproduce everything it’s ever read, but whether it gleaned some understanding of the world while reading. Answering the question about the Channel with something like “No one ever walked across the Channel since humans can’t walk on water” would indicate that it did, even though it’d be factually wrong.
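For anyone unfamiliar with the BO=20/temp=1 setting used above: best-of-n sampling draws n independent completions at the given temperature and keeps the one the model scores as most likely. Here is a toy sketch of that mechanism, not GPT-3 or any real API; the vocabulary, logits, and function names are all invented for illustration:

```python
import math
import random

# Hypothetical fixed "language model": a tiny vocabulary with made-up logits.
VOCAB = ["11", "14", "21", "hours", "minutes"]
LOGITS = [2.0, 1.0, 0.5, 1.5, 0.2]

def softmax(logits, temperature=1.0):
    """Logits -> probabilities; temperature=1 leaves the distribution unscaled."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def sample_completion(rng, length=3, temperature=1.0):
    """Sample `length` tokens independently (a stand-in for autoregressive
    decoding) and track the completion's total log-probability."""
    probs = softmax(LOGITS, temperature)
    tokens, logprob = [], 0.0
    for _ in range(length):
        i = rng.choices(range(len(VOCAB)), weights=probs, k=1)[0]
        tokens.append(VOCAB[i])
        logprob += math.log(probs[i])
    return tokens, logprob

def best_of(n=20, temperature=1.0, seed=0):
    """Draw n candidate completions and return the highest-scoring one,
    mirroring what a BO=20/temp=1 setting does server-side."""
    rng = random.Random(seed)
    samples = [sample_completion(rng, temperature=temperature) for _ in range(n)]
    return max(samples, key=lambda s: s[1])
```

The point of the sketch: temperature=1 samples from the model's unmodified distribution, so individual draws can be quite variable, and best-of-20 then filters toward the candidate the model itself considers most probable, which is why you tend to get the model's single confident (if wrong) answer rather than twenty scattered ones.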
I can’t help but notice you didn’t ask it what’s the world record for walking across the English Channel.