I agree. And I thought Arthur Breitman had a good point on one of the related Twitter threads:
GPT-3 didn’t “pretend” not to know. A lot of this is the AI dungeon environment. If you just prompt the raw GPT-3 with: “A definition of a monotreme follows” it’ll likely do it right. But if you role play, sure, it’ll predict that your stoner friend or young nephew don’t know.
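For concreteness, here is a minimal sketch of the contrast Breitman is describing, using OpenAI's legacy Python completions API. This is my own illustration, not something from the thread; the model name, prompts, and sampling parameters are all assumptions.

```python
# Sketch: raw completion prompt vs. role-play framed prompt for base GPT-3.
# Assumes an API key is configured and the legacy "davinci" base model.
import openai

openai.api_key = "sk-..."  # placeholder

# Raw prompt: the model is likely to just continue with a definition.
raw_prompt = "A definition of a monotreme follows:\n"

# Role-play prompt: the model now predicts what the character would say,
# which may plausibly be "I don't know".
roleplay_prompt = (
    "You are talking to your stoner friend Dave.\n"
    'You: "Hey Dave, what\'s a monotreme?"\n'
    "Dave:"
)

for prompt in (raw_prompt, roleplay_prompt):
    completion = openai.Completion.create(
        engine="davinci",   # base GPT-3, no instruction tuning
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    print(prompt)
    print(completion.choices[0].text.strip())
    print("---")
```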