Random thought: We should expect LLMs trained on user responses to have much more situational knowledge than early LLMs trained on the pre-chatbot internet, because users will occasionally make reference to the meta-context.
It may be possible to get some of this information from pre-training on chat logs and excerpts that make their way onto the internet, but the information won't be quite as accessible because of differences in context.