IMO, excellent blogging happens when people notice connections between ideas while experiencing the world. Good blogging often feels like a side effect of the author’s learning process. The author was there when they formed an idea, there when they tested it against experience, there when they revised it… so they can report on what it was like to make a lasting change to their understanding. Or if not directly reporting on that experience, they can at least generate novel entropy that increases a human reader’s odds of following the author to a particular new or interesting concept which might not be directly transmissible through language.
LLMs appear to lack the kind of “background” thought process that causes one to notice connections one isn’t explicitly seeking. They aren’t really “there” during the processes in which they assimilate information, at least not in the way that I’m still myself while reading a textbook.
I think if we wanted to get blogs out of LLMs that are good in the ways that human blogs are good, the most promising approach would be to try generating blogs as a side effect of their training processes rather than by prompting the finished assistants. But that touches the highly secretive stuff that the stewards of the most powerful models are least inclined to share with the world.
To step back a bit, though, do we need LLMs to be good at blogging? If we model blogs as demonstrations or tutorials of ways you can use a human brain, it doesn’t seem at all obvious that they’d keep their benefits when generated without a human in the loop.