We had the model for ChatGPT in the API for I don’t know 10 months or something before we made ChatGPT. And I sort of thought someone was going to just build it or whatever and that enough people had played around with it.
I assume he’s talking about text-davinci-002, a GPT-3.5 model supervised-finetuned on InstructGPT data, and that he was expecting someone to finetune it on dialog data through OpenAI’s API. I wonder how that would have compared to ChatGPT, which was finetuned with RL, a process that can’t be replicated through the API.
You can’t finetune GPT-3.5 through the API, only GPT-3.
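For context on what "finetune it on dialog data" would have meant in practice: the GPT-3-era fine-tuning API accepted JSONL records with `prompt` and `completion` fields, so a chat transcript had to be flattened into such pairs first. A minimal sketch of that preprocessing step, with illustrative separator and stop-sequence strings (the field names match the legacy API; everything else is an assumption, not OpenAI's actual format):

```python
import json

SEP = "\n\n###\n\n"  # illustrative prompt/completion boundary marker
END = "\nEND"        # illustrative stop sequence appended to completions

def dialog_to_records(turns):
    """turns: list of (speaker, text) tuples.

    Emits one {"prompt", "completion"} record per assistant turn:
    the conversation so far becomes the prompt, the assistant's
    reply becomes the completion.
    """
    records = []
    history = []
    for speaker, text in turns:
        if speaker == "assistant":
            prompt = "\n".join(history) + SEP
            records.append({"prompt": prompt, "completion": " " + text + END})
        history.append(f"{speaker}: {text}")
    return records

turns = [
    ("user", "What is the capital of France?"),
    ("assistant", "Paris."),
    ("user", "And of Italy?"),
    ("assistant", "Rome."),
]

# One JSONL line per training example, ready for the legacy
# fine-tuning endpoint (GPT-3 base models only).
jsonl = "\n".join(json.dumps(r) for r in dialog_to_records(turns))
print(jsonl)
```

This is only the supervised-finetuning half; the RL step that distinguished ChatGPT had no API equivalent at all.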