Doing lengthy research and summarizing it is important work, but not typically what I associate with “blogging”. Still, I think pulling that research together into an attractive product uses much the same cognitive skills as producing original seeing. The missing step in the process you describe is figuring out when the research actually produced surprising insights, which might itself be a class of novel problems (unless a general formulaic approach works and someone scaffolds that in). To the extent it doesn’t require solving novel problems, I think it’s predictably easier than quality blogging that doesn’t rely on research for its novel insights.
“The missing step in the process you describe is figuring out when the research did produce surprising insights, which might be a class of novel problems (unless a general formulaic approach works and someone scaffolds that in).”
-> I feel optimistic about how far prompts can get us here. More powerful/agentic systems will help a lot with actually executing those prompts at scale, but the core technical challenge seems like it could be fairly straightforward. I’ve been experimenting with LLMs to detect which information they could come up with would later surprise them. I think this is fairly measurable.
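One way to make “surprise” measurable, as a minimal sketch: score a candidate claim by its surprisal under the model’s beliefs before versus after doing the research. The probabilities below are hypothetical illustration values, not output from a real model; in practice they would come from model credences or token log-probabilities, which this sketch does not implement.

```python
import math

def surprisal(prob: float) -> float:
    """Surprisal in bits: -log2 of the probability assigned to a claim."""
    return -math.log2(prob)

def novelty_score(p_before: float, p_after: float) -> float:
    """How much the research shifted belief in a claim.

    p_before: credence the model gives the claim *before* the research.
    p_after:  credence it gives *after* reading the research.

    A claim that was improbable a priori but well-supported afterward
    (high surprisal dropping to low) is a candidate "surprising insight";
    a claim the model already believed scores near zero.
    """
    return surprisal(p_before) - surprisal(p_after)

# Hypothetical numbers: the model initially gives a claim 5% credence,
# then 80% after seeing the research it gathered.
score = novelty_score(0.05, 0.80)
```

Claims scoring high on such a metric would be the ones worth promoting into the write-up, while formulaic findings (where p_before is already high) would score near zero. This is only one operationalization, under the assumption that usable credences can be extracted from the model at both stages.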