Deep Research was this for me, at first. Some of its summaries were just pleasant to read; they felt so information-dense and intelligent! Not like typical AI slop at all! But then it turned out most of it was just AI slop underneath anyway.
Can you elaborate on what you mean by this? Do you mean it’s hallucinating a ton underneath? Or that the writing is somehow bad? Or something else?
When reading LLM outputs, I tend to skim them. They’re light on relevant, non-obvious content. You can usually just kind of glance diagonally through their text and get the gist, because they tend to spend a lot of words saying nothing/repeating themselves/saying obvious inanities or extensions of what they’ve already said.
When I first saw Deep Research outputs, they didn’t read to me like this. Every sentence seemed insightful, dense with pertinent information.
Now I’ve adjusted to the way Deep Research phrases itself, and it reads the same as any other LLM output. Too many words conveying too few ideas.
Not to say plenty of human writing isn’t a similar kind of slop, and not to say some LLM outputs aren’t actually information-dense. But well-written human prose is usually information-dense, and can have surprising twists of thought or rhetoric that demand you actually read it properly. And LLM outputs – including, as it turns out, Deep Research’s – are usually very watery.