OpenAI models (particularly 5-thinking) have long had a fetish for jargon, cramming their sentences as full of it as the situation allows (and sometimes fuller).
I also noticed this when asking the GPT-5 Thinking chat model slightly advanced questions about statistics. Its answers are quite technical, somewhat reminiscent of Wikipedia mathematics articles, apparently assuming I was an expert. Gemini and other models try to explain things for laymen. Perhaps OpenAI wanted to save tokens? Seems unlikely.
LLMs have to infer every time whether you’re an expert or not, and sometimes they don’t have a lot to work with.
I had a funny experience with Claude last night: I asked a dumb physics question and it gave a nice high-level answer with nods to the theories it was referencing. But when I asked about one of those theories in a side conversation, it saw my (copied) use of obscure physics jargon, assumed I was an expert, and gave me a wall of equations.
(Memories can help over time if you're asking about the same areas and it's sufficiently obvious to the AI that it should remember you don't know things.)