I think this is correct. Blogging isn’t the easy end of the spectrum; it actually involves solving novel problems of finding new, useful viewpoints. But this answer leaves out the core question: what advances will allow LLMs to produce original seeing?
If you think about how humans typically produce original seeing, I think an LLM-based cognitive architecture could do the same thing in relatively straightforward ways, provided it can direct its own investigations, “think” about what it’s found, and remember what it’s found (using continuous learning of some sort).
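To make that concrete, here’s a minimal sketch of the loop I have in mind. This is Python pseudocode under loud assumptions: `StubLLM`, `Memory`, and every prompt string are hypothetical stand-ins I made up for illustration, and `Memory.store` is a crude placeholder for whatever continuous learning actually ends up looking like.

```python
# A sketch only: every name here is a hypothetical stand-in, not a real API.

class StubLLM:
    """Placeholder for a real model call."""
    def generate(self, prompt: str) -> str:
        return f"<model output for: {prompt[:40]}>"

class Memory:
    """Placeholder for continuous learning; here it's just a list of notes."""
    def __init__(self):
        self.notes: list[str] = []
    def store(self, note: str) -> None:
        self.notes.append(note)
    def recall(self) -> str:
        return "\n".join(self.notes)

def original_seeing_loop(llm: StubLLM, memory: Memory, topic: str, steps: int = 5) -> str:
    """Direct an investigation, 'think' about what was found, and remember it."""
    question = llm.generate(f"Pick an underexplored question about {topic}.")
    for _ in range(steps):
        finding = llm.generate(f"Investigate: {question}\nKnown so far:\n{memory.recall()}")
        insight = llm.generate(f"What, if anything, is genuinely new here?\n{finding}")
        memory.store(insight)  # the continuous-learning step, crudely faked
        question = llm.generate(f"Given that, what should be investigated next?\n{insight}")
    return llm.generate(f"Write up the novel viewpoint suggested by:\n{memory.recall()}")
```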
Or of course sometimes LLMs just pop out a perspective that counts as original seeing relative to most readers. Then the challenge is identifying whether it is really of interest to enough of the target audience. That usually involves some elaborate thinking: chains of thought, goal-directed agency, and “executive function” or metacognition (noticing whether the current CoT is on-task, redirecting it if not, then aggregating all of the on-task CoTs to draw conclusions).
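Here’s the same kind of hedged sketch for that metacognitive loop, reusing the `StubLLM` stand-in from above: sample several chains of thought, check each for being on-task, redirect the ones that drift, drop the ones that never recover, and aggregate the survivors. The prompts and the ON/OFF convention are made up for illustration.

```python
# Sketch of the "executive function" loop: on-task checks, redirection,
# then aggregation. llm is any object with a generate(str) -> str method,
# e.g. the StubLLM above.

def run_with_metacognition(llm, goal: str, n_chains: int = 4, max_redirects: int = 2) -> str:
    kept = []  # chains of thought judged on-task
    for i in range(n_chains):
        cot = llm.generate(f"Think step by step about: {goal} (attempt {i})")
        for _ in range(max_redirects + 1):
            verdict = llm.generate(f"Is this still on-task for '{goal}'? Answer ON or OFF.\n{cot}")
            if "OFF" not in verdict:
                kept.append(cot)  # judged on-task: keep this chain
                break
            cot = llm.generate(f"That drifted. Redirect it toward '{goal}':\n{cot}")
        # chains still off-task after the redirect budget are discarded
    return llm.generate("Aggregate these on-task chains into conclusions:\n" + "\n---\n".join(kept))
```

Obviously the hard part is hidden inside the prompts and the on-task judge; the point is just that nothing in the loop structure itself looks beyond current LLM agents.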
It’s not clear when this will happen, because there isn’t much economic payoff for improving original seeing.
This is very much the same process as solving novel problems, which LLMs rarely, if ever, do. But there’s a clearer economic payoff here: people want agents that do useful work without constant human intervention, and that requires solving the minor novel problems of getting past dead ends in their current approaches.
So I’d predict we get human-level blogging in 1-2 years (that’s human-level, which is, on average, terrible—it just has some notable standouts).
For more detail, see almost all of my other posts. :)