I have a general prediction that current-style LLMs, being inherently predictors of what a human would say, will eventually plateau at a roughly human level of ability to think and reason: breadth of knowledge clearly beyond human, but intelligence not far above it, and creativity maybe below it. AI companies are predicting that next-gen LLMs will provide new insights and solve unsolved problems. But genuine insight seems to require the ability to internally regenerate concepts from lower-level primitives (as mentioned in Yudkowsky’s “Truly Part Of You”). An AI that took in raw data and learned to understand it the way a human brain does might be able to keep advancing beyond the human capacity for thought. I’m not sure that a contemporary LLM, trained directly on existing human knowledge as it is, will ever be able to do that. Maybe I’ll be proven wrong soon.
I think this is one of the standard rebuttals to this position: “GPTs are Predictors, not Imitators”
Thanks! I hadn’t read that one before; it’s a good point that predicting what any specific person might say requires more intelligence than that person themselves has. That said, I’m not convinced that a model trained on human text being superintelligent at predicting human text necessarily means it can break out above human-level thinking.
If we discovered an intelligent alien species tomorrow, would we expect LLMs to be able to predict their next word? I’m fairly confident that the answer is “only if they thought very much like we do, just in a different language.” Similarly, my suspicion is that a what-would-a-human-say predictor can never be a what-would-a-superintelligence-say predictor—or at least, only a predictor of what a human thinks a superintelligence would say.
maybe not well, but at least better than gzip:
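The gzip quip points at a real equivalence: a compressor is a predictor, since the bits it spends encoding a string measure how surprising that string is given what the compressor has already seen. Here is a minimal sketch of that idea using Python’s `zlib` (the toy strings are mine, purely for illustration):

```python
import zlib

def gzip_bits(text: str) -> int:
    """Compressed size in bits: a crude upper bound on the text's entropy."""
    return 8 * len(zlib.compress(text.encode("utf-8"), 9))

def extra_bits(reference: str, sample: str) -> int:
    # Bits needed to encode `sample` once the compressor has already
    # "seen" `reference` -- lower means the reference's statistics
    # predict the sample better. This is the usual compression-as-
    # prediction trick (cf. gzip-based text classification).
    return gzip_bits(reference + sample) - gzip_bits(reference)

# Toy corpora, made up for illustration.
english = "the quick brown fox jumps over the lazy dog " * 20
familiar = "again the quick brown fox jumps over the lazy dog"
noise = "xq7#v9@zk2!mw4$pr8&tl3*uc6%yh1^bn5(jd0)fs_e"

print(extra_bits(english, familiar))  # small: gzip "predicts" this well
print(extra_bits(english, noise))     # large: unpredictable from the reference
```

An LLM’s training loss is this same quantity, just measured with a vastly better model of the reference text, which is why “better than gzip” is a meaningful (if low) bar for predicting alien speech.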