It reminds me a little of a story in *Surely You're Joking, Mr. Feynman!* in which someone who thinks Feynman is a phony decides to show him up by having someone greet him (in some social situation where he might be embarrassed to admit ignorance) in Chinese. So Feynman meets Chinese speaker; Chinese speaker greets Feynman in Chinese; Feynman responds with fluent pseudo-Chinese doubletalk; Chinese speaker retires in confusion, thinking that Feynman speaks Cantonese where she speaks Mandarin (or maybe it's the other way around).
It’s rather a weird story and I can’t escape the suspicion that somehow Feynman’s telling of it isn’t perfectly honest. But the relevant thing here is that Feynman, presented with a sudden demand to produce a language he doesn’t know, responds not by saying “I’m sorry, Dave, I can’t do that”, but by fluently producing nonsense that resembles that language. Which is pretty much what GPT-3 is doing when you demand that it speak to you in Morse code.
Or consider “speaking in tongues” as practiced in some religious communities; here there isn’t a specific other language involved, but in some sense what’s happening is that people are put in a position where they look good if they start speaking Mysterious Angelic Languages, so (in at least some cases) they start emitting fluent pseudo-language. (I am assuming here that in fact they are not magically speaking Mysterious Angelic Languages; my understanding is that quite aside from the prior improbability of that, when linguists look at samples of glossolalia it doesn’t look very language-like and does contain a surprisingly large number of short-term repetitions, rather like some of that GPT-3 output.)
None of this is exactly the same thing that GPT-3 is doing when asked to translate into Morse code, but it's similar enough that I don't find GPT-3's output so completely alien. I think that when a person is put in a situation where fluent nonsense might make them look better than an honest admission of ignorance, fluent nonsense isn't so unusual, and arguably GPT-3 is always in a position where fluency is required.
That doesn't seem too far off from the Chinese room thought experiment: https://en.wikipedia.org/wiki/Chinese_room
Coincidentally, that one also involves Chinese. Or is that irony? I can never quite pin down irony, technically; can GPT-3? lol.