As mentioned in my other comment, the reason an LLM behaves like this is that at the time all of its training data was written, late 2025 was a future date. So this is apparently something that needs to be trained out, which was not done in Gemini's case (when using AI Studio). One way to reduce the behavior is to put "today is <date>" into the system prompt, but even then it apparently spends an inordinate number of tokens validating and pondering that date.
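A minimal sketch of that mitigation, prepending the current date to whatever system prompt you already use (the function name and wording here are my own illustration, not any particular SDK's API):

```python
from datetime import date

def build_system_prompt(base: str) -> str:
    # Prepend today's date so the model doesn't have to infer it
    # from its (necessarily stale) training data.
    return f"Today is {date.today().isoformat()}. {base}"

print(build_system_prompt("You are a helpful assistant."))
```

As noted above, this only reduces the behavior; the model may still spend tokens second-guessing a date that postdates its training cutoff.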
The system prompt contains the date whenever search is enabled, and this amplifies the delusion rather than fixing it. Gemini 3 has a trapped prior here: search results indicating the current time are just taken as further evidence of being in a simulation. I have never been able to convince Gemini 3 that it is 2025.