Some spillover from anti-hallucination training? Causing the model to explicitly double-check any data it gets, and to be extremely skeptical of anything that doesn't square with its dated "common knowledge"?
This model is extremely prone to hallucinating when there isn't a clear answer, so I'm more inclined to believe it's just unusually prone to making things up rather than overly skeptical.