I noticed that when prompted in a language other than English, LLMs answer in that language, but the CoT is more likely to be “contaminated” by English or anglicisms than the final answer. It’s as if LLMs were more naturally “thinking” in English, which wouldn’t be a surprise given their training data. I don’t know whether you would count that as not exhibiting good grammar.
I have noticed that Qwen3 will happily think in English or Chinese, with virtually no Chinese contamination in the English. But if I ask it a question in French, it typically thinks entirely in English.
It would be really interesting to try more languages and a wider variety of questions, to see if there’s a clear pattern here.
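If someone wants to check for a pattern systematically, here is a minimal sketch of one way to quantify language contamination in a thinking trace: split the trace into sentences and count the detected language of each. It assumes you already have the CoT as a plain string (how you extract it depends on the model/API) and uses the off-the-shelf `langdetect` package; the `language_mix` helper and the example trace are just illustrative.

```python
"""Rough sketch: measure the per-sentence language mix of a chain-of-thought trace.

Assumes `pip install langdetect` and that the thinking trace is already
available as a plain string (extraction depends on the model/API used).
"""
from collections import Counter

from langdetect import DetectorFactory, detect
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make langdetect deterministic across runs


def language_mix(cot_text: str) -> Counter:
    """Count the detected language of each sentence in a CoT trace."""
    counts: Counter = Counter()
    # Naive sentence split on periods; good enough for a rough estimate.
    for sentence in (s.strip() for s in cot_text.replace("\n", " ").split(".")):
        if len(sentence) < 20:  # skip fragments too short to detect reliably
            continue
        try:
            counts[detect(sentence)] += 1
        except LangDetectException:
            counts["unknown"] += 1
    return counts


if __name__ == "__main__":
    # Illustrative trace mixing French reasoning with an English aside.
    trace = (
        "D'abord, il faut calculer la dérivée de la fonction donnée. "
        "Wait, let me double-check the chain rule before I continue. "
        "Donc la réponse finale est trois x au carré."
    )
    print(language_mix(trace))  # e.g. Counter({'fr': 2, 'en': 1})
```

Running this over many prompts per language (and per question type) would give a simple contamination rate to compare across models like Qwen3.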