No, you’re absolutely right. I actually tried asking GPT-5 about a w3w location, and even with web search on, it concluded the spot was probably at sea because it couldn’t find anything at that address… and the address was in Westminster, London.
So even though words are closer to the “native language” of an LLM, it was still much, much worse at this, for all the other reasons you mentioned.
There is also fixphrase.com, where neighboring squares typically share the first three of their four words, so I suspect it might work better in theory — though in practice it’s probably absent from the training data.