I personally disagree, because while the LLM can be very “smart” (as in, its text-predicting capabilities may vastly exceed a human’s), it’s smart at that specialised task. Meanwhile, the simulacra it creates are not quite that smart. I doubt any of them can write great novels, because the LLM wasn’t trained on a sufficient amount of great novels to extrapolate from. There just aren’t enough of those. The training set remains dominated by internet bullshit, not Tolstoy and Melville. As for scientific papers, the hard part is developing the theory, and LLMs don’t seem to perform well at formal logic and maths. I really don’t see one doing anything like that unaided. Essentially I get the sense that, creativity-wise, an LLM can’t consistently produce something at the highest percentiles of human output, and instead falls back towards mediocrity as a default. Though I might be wrong, and this might be more a property of RLHF’d models than of the pretrained ones.
Fair enough. I’d like to see what happens if you could prompt it (maybe requiring a larger context window) with something like “Here’s a list of my rankings of a bunch of works by quality; write something about X that I’d consider high quality.”