This video conjectures that GPT-3 literally just memorizes everything from the training corpus and remixes it, without complex reasoning. https://www.youtube.com/watch?v=SY5PvZrJhLE
So, GPT-3 is something like a giant look-up table? One that approximates the answer by interpolating between a few of the nearest recorded answers, while the actual intellectual work was performed by those who created the training dataset?
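For concreteness, the "giant look-up table" hypothesis could be sketched as something like nearest-neighbour retrieval over stored prompt/answer pairs. This is purely an illustrative toy, not a claim about how GPT-3 actually works; the table contents and the similarity measure are invented for the example.

```python
# Toy sketch of the "giant look-up table" conjecture (illustrative only):
# store prompt -> answer pairs, and answer a new prompt by returning the
# answers attached to its most similar stored prompts.
from difflib import SequenceMatcher

# Hypothetical "training corpus" of recorded prompt/answer pairs.
TABLE = {
    "two plus two": "four",
    "two plus three": "five",
    "capital of France": "Paris",
}

def similarity(a: str, b: str) -> float:
    # Crude string similarity standing in for whatever "nearest" means.
    return SequenceMatcher(None, a, b).ratio()

def lookup(prompt: str, k: int = 2) -> list[str]:
    # Rank stored prompts by similarity and return the k closest answers.
    ranked = sorted(TABLE, key=lambda p: similarity(prompt, p), reverse=True)
    return [TABLE[p] for p in ranked[:k]]

print(lookup("two plus two equals"))
```

Under this picture, anything the system "knows" was placed in the table by whoever wrote the corpus; the lookup step only blends nearby entries.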
Thanks for sharing that. Having now watched the video, I am updating towards that position: I'm now only about 80% confident that reasoning isn't a roadblock. I look forward to learning whether GPT-3 can do word-scrambling tasks.
The same conjecture could work for GPT-I