My own understanding is that, other than maybe for writing code, no one has actually given LLMs the kind of training a talented human gets on the way to becoming someone capable of novel and useful intellectual work. An LLM has a lot of knowledge, but knowledge isn’t what makes novel and useful intellectual work achievable. A non-reasoning model gives you the equivalent of a top-of-mind answer. A reasoning model with a large context window and chain of thought can do better and solve more complex problems, but still mostly ones within the reach of a newly hired college grad or graduate student.
I genuinely don’t know whether an LLM with proper training could do novel intellectual work at current capability levels. Finding out in a way I’d find convincing would take someone giving it the hundreds of thousands of dollars’ and subjective years’ worth of guidance, feedback, and iteration that humans get. And really, you’d have to do this at least hundreds of times, across different fields and with different pedagogical methods, to demonstrate a “no” even slightly satisfactorily, because 1) most humans empirically fail at this, and 2) those who succeed don’t all do so in the same field or by the same path.
Fair enough, thanks.