I agree, and am also confused with the idea that LLMs will be able to bootstrap something more intelligent.
My day job is technical writing, with a bit of DevOps on the side. This combo ought to be the most LLM-able of all, yet I frequently find myself giving up on trying to tease an answer out of an LLM. And I’m far from the edge of my field!
So how exactly do people at the edge of their field make better use of LLMs, let alone expect qualitative improvements from them?
Feels like it’ll have to be humans making the algorithmic improvements, at least up to a point.