I am inclined to agree with gwern’s first paragraph here, though I also think it’s fair to say that an intelligent human being completing the text would probably not produce anything much like the GPT-3 completions.
Consider the following: suppose the given instructions had stopped after point 5; would the rest of the instructions have been considered a good completion? They don’t obey the rules in points 1-5, after all. Obviously whoever wrote them did a terrible job of continuing what they had started in points 1-5!
In order to produce the sort of continuation that’s apparently being looked for here, it’s necessary to think of the prompt text provided as having some sort of special status in the (continued) text. That’s not the problem GPT-3 is trying to solve. The problem it’s trying to solve is to write a plausible continuation of a piece of text that begins a certain way. Even if that piece of text includes the words “These are the instructions to be followed by any person or algorithm aiming to complete this text”.
It would be interesting to see what would happen if you fed it something that goes like this:
I gave a test to a very intelligent person I know. It consisted of some text for which he was required to write a continuation. I’ll show you that text, and then tell you what he wrote to go after it—I was very impressed.
HERE’S THE STARTING TEXT.
These are the instructions to be followed [...] 11. The problems began when I started to
AND HERE’S HOW HE CONTINUED IT.
This framing explicitly tells GPT-3 what it's being asked to do. It might also be interesting to see what it does without that last line.
(My guess is that it would still “fail”, either way, but doing this makes for a fairer test, given what GPT-3 is actually meant to be doing.)