Thanks! I hadn’t read that one before; it’s a good point that predicting what any specific person might say requires more intelligence than that person themselves has. Having said that, I’m not convinced that a model trained on human text being superintelligent at predicting human text necessarily means it can break out above human-level thinking.
If we discovered an intelligent alien species tomorrow, would we expect LLMs to be able to predict their next word? I’m fairly confident that the answer is “only if they thought very much like we do, just in a different language.” Similarly, my suspicion is that a what-would-a-human-say predictor can never be a what-would-a-superintelligence-say predictor—or at least, only a predictor of what a human thinks a superintelligence would say.
Maybe not well, but at least better than gzip:
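Something like this minimal sketch is the idea (Python’s `gzip` module standing in for the command-line tool; the helper names and the lowercase-plus-space alphabet are just illustrative choices, not a serious predictor): score each candidate next character by how short the context compresses with it appended, and take whichever compresses best. A compressor that assigns shorter codes to likelier continuations is implicitly a (weak) next-token predictor.

```python
import gzip
import string

def compressed_len(text: str) -> int:
    # Length of the gzip-compressed text; shorter output means gzip's
    # implicit model "expected" this continuation more.
    return len(gzip.compress(text.encode("utf-8")))

def predict_next_char(context: str,
                      alphabet: str = string.ascii_lowercase + " ") -> str:
    # Pick the character whose addition compresses best, i.e. the one
    # that extends a repeated pattern gzip has already seen.
    return min(alphabet, key=lambda ch: compressed_len(context + ch))

if __name__ == "__main__":
    ctx = "the quick brown fox jumps over the lazy dog. the quick brown f"
    # The repeated phrase should let the match extend, so 'o' should win.
    print(predict_next_char(ctx))
```

It only “predicts” by exploiting repetition in the context, which is roughly the point: it is a far weaker model of the text than an LLM, but it is still doing the same job of assigning probabilities to continuations.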