LLMs have to infer every time whether you’re an expert or not, and sometimes they don’t have a lot to work with.
I had a funny experience with Claude last night. I asked a dumb physics question and it gave a nice high-level answer with some nods to the theories it was referencing. But when I asked about one of those theories in a side conversation, it saw my (copied) use of obscure physics jargon, assumed I was an expert, and gave me a wall of equations.
(Memories can help over time if you’re asking about the same areas and it’s sufficiently obvious that the AI should remember you don’t know things.)