Most of the well-known LLMs are absurdly sycophantic, so I would most certainly not trust them on whether an idea is good.
They’re also unreliable on whether it’s correct, at least on obscure topics: when they don’t know, they take whatever is in the prompt and just assume it must be right.
====
I seem to have basically reinvented how Deep Research AI works recently: the completely obvious thing you would think of doing, hooking an LLM up to a framework that can pull in search results, has in fact already been done by the AI companies. I make no claim of novelty here; this is just the totally obvious “OK, so I have an LLM. Great. How can I get it to give a sensible answer to my question?” And, of course, everyone and their dog is doing it.
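For what it’s worth, the skeleton fits in a few lines. Here is a minimal sketch in Python, where web_search() and call_llm() are made-up placeholders for whatever search API and LLM endpoint you actually have access to:

    def web_search(query: str, max_results: int = 5) -> list[str]:
        # Placeholder: swap in a real search API call here.
        return [f"(snippet {i + 1} returned for: {query})" for i in range(max_results)]

    def call_llm(prompt: str) -> str:
        # Placeholder: swap in a real LLM API call here.
        return f"(model reply to a {len(prompt)}-character prompt)"

    def answer_with_search(question: str) -> str:
        # The whole trick: search, paste the snippets into the prompt,
        # and ask the model to answer only from those sources.
        snippets = web_search(question)
        sources = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
        prompt = (
            "Using only the numbered sources below, answer the question, citing "
            "sources by number; say so if they do not contain the answer.\n\n"
            f"Sources:\n{sources}\n\nQuestion: {question}\nAnswer:"
        )
        return call_llm(prompt)

    if __name__ == "__main__":
        print(answer_with_search("How does Deep Research pull in search results?"))

The production systems presumably iterate this loop (propose follow-up searches, read more, then write up) rather than searching once, but the core idea is the same.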
====
I frequently find myself reviewing conference paper submissions where the result is correct but not interesting. The referee feedback form usually has a tick box for this.
The introduction section of your paper needs to convey “why does anyone care whether this is true or not?”