“detailed episode guides for an old show not being in the training data”
This is incorrect. I’m the author of this test. The intention was to show that data we can prove is in the training set isn’t correctly surfaced by the LLM. So in this case, when it hallucinates or says “I don’t know”, it should know.
As to model confidence, you might find what I recently wrote about “hallucinations are provably unsolvable” of interest, particularly the section “The Attempted Solutions to Solve AI Hallucinations” and the two papers linked within.
This is a very interesting correction, and I would appreciate some clarification as to how its presence in the training set is actually proven. Web scrapers are not entirely predictable, this is a “far corners of fandom wikis” thing, and for most models the training corpus gets filtered for various reasons. That is why I assumed this was a case of “not in the training data, so the answer is inferred from pop culture tropes”. (The inference typically invents an episode where mind reading was not real).
Now, I have seen two interesting exceptions beyond the obvious “model uses web search” case, but I suspect both were explicitly done in response to the article:
The OpenAI o3 model, called via API without a web search tool, comes up with other episodes where mind reading was logically a consequence of the plot devices (notably “Ring around Gilligan”), then with Seer Gilligan when prompted for more. In my opinion this goes together with o3 being benchmark-optimized in general: what you created is, in effect, a (very small) benchmark, so I think someone at OpenAI outright RLHF’ed it to one-up you.
There is a “GodGPT” pushing ads on xitter—I tested it and it immediately came up with Seer Gilligan. The devs won’t reveal what their base model is, and it responds with what I see as pseudo-spiritual nonsense to most other prompts. That nonsense outright denies any “grounding” exists, so I am guessing this is fine-tuning and not a web search. No idea whether the fine-tuning is in the base model or in the particular customisation.
And yeah, I agree hallucinations are likely not solvable in the general case. For that general case, the Google Gemini approach of “default to web search in case of doubt in every step” seems to me the closest approximation to a solution. (Gemini 2.5 Pro on the web UI of a paid account aces the Gilligan test, and the thinking steps show it starts with a web search. It reports several sources, none of which are your article, but the thinking also lists an “identifying primary sources” step, so maybe the article was there and then got filtered out).
I am, however, interested in solving hallucinations for a particular subcase where all the base knowledge is provided in the context window. This would help with documentation-based support agents, legal retrieval, and so on. Whether a full solution to this one would also produce better results than a non-LLM advanced search engine on the same dataset is an interesting question.
I used Infi-gram to prove the data exists in the training set, as well as other prompts that could reveal the information exists. For example, LLMs sometimes could not answer the question directly, but when asked to list the episodes they could do so, revealing that the episode exists within the LLM’s dataset.
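For anyone who wants to reproduce that kind of membership check: infini-gram exposes a public web API that counts exact string occurrences in several open pretraining corpora. A minimal sketch, with the endpoint and index id taken from my recollection of the infini-gram docs (treat both as assumptions and verify them; an open corpus is also only a proxy for what any particular closed model actually trained on):

```python
import requests

# Count exact occurrences of a phrase in an open pretraining corpus via the
# infini-gram web API. The endpoint and the index id are assumptions from the
# public infini-gram docs; swap in whichever corpus index you want to check.
API_URL = "https://api.infini-gram.io/"

payload = {
    "index": "v4_rpj_llama_s4",  # assumed id for an open (RedPajama) index
    "query_type": "count",       # exact n-gram count query
    "query": "Seer Gilligan",    # the string whose presence we want to show
}

resp = requests.post(API_URL, json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())  # expected to include a "count" field with the hit count
```

A non-zero count only shows the phrase sits verbatim in that open corpus; it is evidence rather than absolute proof for any specific closed model, which is why the indirect prompts above are a useful complement.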
FYI, “Ring around Gilligan” is surfaced incorrectly. It is not about mind reading. It is about controlling another person through a device that makes them do whatever they are asked.
Although I can’t know specifically why some models are now able to answer the question, it isn’t unexpected that they eventually would. With more training and bigger models, the statistical bell curve of what the model can surface does widen.
BTW, your primary use case is mine as well. Unfortunately, I have had no luck with reliable processing of knowledge in the context window. My best solution has been to prompt for direct citations from any document, so I can easily verify whether the result is a hallucination or not. That doesn’t stop hallucinations, but it helps me identify them more quickly.
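The verification half of that is mechanical once the citations are required to be verbatim quotes. As a minimal sketch (the function and the toy strings below are purely illustrative, not from any particular library): pull out whatever the model put in quotes and confirm each quote actually appears in the source document.

```python
import re

def find_unsupported_quotes(answer: str, source: str) -> list[str]:
    """Return quoted passages from the answer that do not appear verbatim
    in the source document, i.e. candidates for hallucinated citations."""
    # Treat anything the model wrapped in double quotes as a claimed citation.
    quotes = re.findall(r'"([^"]+)"', answer)

    # Normalize whitespace and case so line wrapping doesn't cause false alarms.
    def norm(s: str) -> str:
        return " ".join(s.split()).lower()

    source_norm = norm(source)
    return [q for q in quotes if norm(q) not in source_norm]

# Toy example: the second "citation" is not in the source, so it gets flagged.
source_document = "The warranty covers manufacturing defects for 24 months."
model_answer = (
    'The policy says "covers manufacturing defects for 24 months" '
    'and also "includes accidental damage".'
)
print(find_unsupported_quotes(model_answer, source_document))
# -> ['includes accidental damage']
```

Anything flagged still needs a human look, but it turns spot-checking into a quick diff rather than a re-read of the whole document.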
I suspect training for such specific tasks might improve performance somewhat, but hallucinations will never go away on this type of architecture. I recently wrote about that in detail here: “AI Hallucinations: Proven Unsolvable—What Do We Do?”
Sorry for the late reply; my karma here is negative and I have a 3-day penalty on replies. For some reason, everything I’ve posted here received lots of downvotes without comment, so I’ve mostly quit posting here.
Understood, thanks!
Now, I have some ideas specifically about knowledge in the context window (in line with your “provide citations” approach, but with more automated steps, closer to the “programmatic verification” you mention in your article). I need to experiment before I can see if they work. And right now I’m stuck on getting an open-source chat environment working in order to scaffold this task. (LibreChat just outright failed to create a user; OpenWebUI looks better, but I’m probably sticking all my processing into LiteLLM or something like that, because finding hooks in these environments is not easy).
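To give a sense of what “sticking all my processing into LiteLLM” means here, without getting into the actual idea: the scaffolding is just a thin wrapper around litellm.completion() that injects the source document, demands verbatim citations, and hands the answer to a verification step. A rough sketch, assuming the standard litellm.completion() call; check_citations() is a placeholder standing in for whatever the real verification ends up being:

```python
import re
import litellm

def check_citations(answer: str, document: str) -> list[str]:
    # Placeholder verification step: flag quoted passages that are not found
    # verbatim in the document. The real check would be more involved.
    quotes = re.findall(r'"([^"]+)"', answer)
    return [q for q in quotes if q not in document]

def answer_from_document(question: str, document: str) -> dict:
    """Answer only from the supplied document, demand verbatim citations,
    then run the answer through the verification step."""
    response = litellm.completion(
        model="gpt-4o-mini",  # any LiteLLM-supported model id works here
        messages=[
            {"role": "system",
             "content": ("Answer using only the document below. Quote the "
                         "exact sentences you rely on in double quotes.\n\n"
                         + document)},
            {"role": "user", "content": question},
        ],
    )
    answer = response.choices[0].message.content
    return {"answer": answer, "flags": check_citations(answer, document)}
```

The point is only that the processing lives in one place behind an OpenAI-compatible interface, so whichever chat front end ends up working can sit on top of it unchanged.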
I won’t brag about idea details. Let me see if they work first.
Hallucinations about training knowledge cannot be solved. And I do suspect that your article is the primary reason some models answer correctly. There is a tendency to optimize for benchmarks, and your article is a de facto benchmark.
(The “Ring around Gilligan” part is a typical “fandom debate”. I’ve had my share of those, not about this series of course, but boooy, Babylon 5 brings back memories: I had [Team Anna Sheridan] in my Fidonet signature for some time. My suspicion is that “Ring around Gilligan” is surfaced specifically because someone at OpenAI thinks the ring in question logically would allow mind reading, and the rest is RLHF to one-up you.)