“ofc, once generalization is cracked then it’s all over. but in the meantime, this could persist for some duration.”
I don’t agree with this framing. The models have been getting steadily better at generalizing, and I don’t think “generalization” is an atomic ability that can be “cracked.”
ok, replace with “once we steadily sidle up to human level generalization”
Humans are much better at generalization than LLMs (we are more general and much more sample-efficient on text), which comes from us implementing some learning algorithm that is more general. Why couldn’t that be “cracked”?