The fact that it uses emojis, that it reports alleged emotions, the way it sometimes responds with an excessive number of short sentences, the strong tendency toward conspiratorial thinking and absurd explanations… ChatGPT has basically none of those properties, or only to a very small degree.
I don’t know what has gone wrong at Microsoft here. Apparently there were some disagreements about how OpenAI fine-tunes things.
Edit: The repetitive sentences and the conspiratorial thinking might not be the result of different fine-tuning. Maybe Bing Chat runs on a smaller model to save on inference costs, like Curie or Babbage, or some other smaller GPT-3.5/GPT-4 model. A smaller model would have issues that ChatGPT doesn’t have. Google already announced in a recent blog post that they won’t use their large LaMDA 2 model for search, but a smaller one. Smaller models have lower inference costs.
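For a rough sense of how big that cost gap is: a dense transformer’s forward pass takes approximately 2·N FLOPs per generated token, where N is the parameter count. A minimal sketch under that approximation; the parameter counts below are community estimates, not official figures:

```python
# Back-of-the-envelope inference cost: a dense transformer's forward pass
# costs roughly 2 * N FLOPs per generated token (N = parameter count).

def flops_per_token(n_params: float) -> float:
    """Approximate forward-pass FLOPs to generate one token."""
    return 2 * n_params

# Parameter counts are community estimates (assumptions), not official.
models = {
    "babbage (est. ~1.3B params)": 1.3e9,
    "curie (est. ~6.7B params)": 6.7e9,
    "175B-class model": 175e9,
}

base = flops_per_token(models["175B-class model"])
for name, n in models.items():
    f = flops_per_token(n)
    print(f"{name}: {f:.1e} FLOPs/token ({f / base:.3f}x the 175B-class cost)")
```

Under these (estimated) sizes, a Curie-sized model would cost only a few percent of what a 175B-class model costs per token, which is the kind of gap that could plausibly drive such a decision at search-engine scale.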