In case you haven’t already found out, the free version has been updated to use a smaller version of GPT-3. Confirmed on Twitter: https://twitter.com/nickwalton00/status/1284842368105975810?s=19
The smaller models aren’t very interesting, though, especially if you want to probe the model’s understanding and intelligence. All of the interesting meta-learning only shows up as you scale to 175b/davinci; see the paper’s graph of few-shot performance vs. model size. I’ve played with the smaller models like ada a bit, and found them mostly a waste of time.