Pretty sure I’ve seen this particular case discussed here previously, and the conclusion was that the team had already published something related and fed it to the “co-scientist” AI. So it was synthesising/interpolating from information it had been given, rather than generating fully novel ideas.
Per NewScientist https://www.newscientist.com/article/2469072-can-googles-new-research-assistant-ai-give-scientists-superpowers/
However, the team did publish a paper in 2023 – which was fed to the system – about how this family of mobile genetic elements “steals bacteriophage tails to spread in nature”. At the time, the researchers thought the elements were limited to acquiring tails from phages infecting the same cell. Only later did they discover the elements can pick up tails floating around outside cells, too.
So one explanation for how the AI co-scientist came up with the right answer is that it missed the apparent limitation that stopped the humans getting it.
What is clear is that it was fed everything it needed to find the answer, rather than coming up with an entirely new idea. “Everything was already published, but in different bits,” says Penadés. “The system was able to put everything together.”
That was regarding the main hypothesis, the one that matched their own work. It’s unknown whether the same is true of the system’s additional hypotheses. But I’m sceptical by default of the claim that an idea couldn’t possibly have come from the training data, or that the researchers definitely didn’t inadvertently hint at it with the data they provided.