When reading LLM outputs, I tend to skim. They’re light on relevant, non-obvious content. You can usually just glance diagonally through the text and get the gist, because they spend a lot of words saying nothing, repeating themselves, or stating obvious inanities and extensions of what they’ve already said.
When I first saw Deep Research outputs, they didn’t read to me like this. Every sentence seemed insightful, dense with pertinent information.
Now I’ve adjusted to the way Deep Research phrases itself, and it reads the same as any other LLM output. Too many words conveying too few ideas.
Not to say plenty of human writing isn’t a similar kind of slop, or that some LLM outputs aren’t genuinely information-dense. But well-written human prose is usually information-dense, and can take surprising twists of thought or rhetoric that demand you actually read it properly. LLM outputs – including, as it turns out, Deep Research’s – are usually very water-y.