If the LLM is like a person, then yes, I agree. But if we take seriously the idea that LLMs simulate the processes that generate text, and can call up different simulations as needed, then the average prompt will call up a simulation corresponding to the intelligence of the average text in the training data. Somewhere in a large enough LLM, though, there should be a simulation intelligent enough to write high-level physics papers, great novels, and so on. This isn’t my field, and maybe I’m misunderstanding the situation in all sorts of ways, but that’s my current interpretation.
I personally disagree. While the LLM can be very “smart” (its text-prediction capabilities may vastly exceed a human’s), it’s smart at that specialised task; the simulacra it creates are not quite as smart. I doubt any of them can write great novels, because the LLM wasn’t trained on enough great novels to extrapolate from. There just aren’t enough of those: the training set remains dominated by internet bullshit, not Tolstoy and Melville. As for scientific papers, the hard part is developing the theory, and LLMs don’t seem to perform well at formal logic and maths, so I really don’t see one doing anything like that unaided. Essentially, I get the sense that, creativity-wise, an LLM can’t consistently produce something at the highest percentiles of human output, and instead falls back towards mediocrity as a default. Though I might be wrong, and this might be more a property of RLHF’d models than of pretrained ones.
Fair enough. I’d like to see what happens if you could (maybe requiring a larger context window) prompt it with something like “Here’s a list of my rankings of a bunch of works by quality, write something about X that I’d consider high quality.”
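A minimal sketch of how that ranked-examples prompt could be assembled programmatically. The helper name, the example works, and the ranking notes are all invented for illustration; nothing here is tied to any particular model or API.

```python
def build_quality_prompt(ranked_works, topic):
    """Build a prompt that first shows the model our quality rankings,
    then asks for new writing judged against that standard.

    ranked_works: list of (title, note) pairs, best first.
    topic: subject for the requested piece.
    """
    lines = ["Here are my rankings of a bunch of works by quality, best first:"]
    for rank, (title, note) in enumerate(ranked_works, start=1):
        lines.append(f"{rank}. {title} - {note}")
    lines.append(
        f"Write something about {topic} that I would rank alongside the top entries."
    )
    return "\n".join(lines)

# Hypothetical example rankings, purely for illustration:
prompt = build_quality_prompt(
    [
        ("Moby-Dick", "dense, digressive, first-rate prose"),
        ("An airport thriller", "readable but forgettable"),
    ],
    "whaling towns",
)
print(prompt)
```

With a long enough context window, the list could be extended to many works, which is the point of the suggestion above: the rankings give the model an explicit quality signal to condition on, rather than leaving "high quality" implicit.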