The second example, with a leading space, tokenizes differently as [' r', 'ieden', 'heit'], so the LLM is drawing on information memorized about these more common tokens. You can check this at https://platform.openai.com/tokenizer