one medium-term future that still seems possible is that models continue to be bad at generalization, and so a huge fraction of the economy is AI data labelling for various extremely niche or brand-new areas: a world where new problems are solved once by humans and the solution reused near-free forever via AI.
ofc, once generalization is cracked, it’s all over. but in the meantime, this could persist for some duration.
“ofc, once generalization is cracked, it’s all over. but in the meantime, this could persist for some duration.”
I don’t agree with this framing. The models have been getting steadily better at generalizing, and I don’t think “generalization” is an atomic ability that can be “cracked.”
ok, replace with “once we steadily sidle up to human-level generalization”
Humans are much better at generalization than LLMs (we are more general and far more sample-efficient on text), which comes from us implementing some learning algorithm that is more general. Why couldn’t that be “cracked”?
I feel like it’s more precise to say “extrapolation”: what you’re gesturing at is that humans have to be the ones to “push the frontier,” but once a task has been figured out, it (and anything sufficiently similar) can be solved by AI generalizing interpolatively.
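To make the interpolation/extrapolation distinction concrete, here’s a toy sketch (my own illustration, not from the thread; numpy-based, with a degree-9 polynomial fit to a sine curve standing in for any flexible function approximator). Inside the range covered by training data, the fit looks highly capable; just past the “frontier,” error blows up:

```python
# Toy sketch: interpolation vs. extrapolation for a flexible model.
# The polynomial, the sine target, and the ranges are all arbitrary
# illustrative choices, not anyone's actual experiment.
import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

# Training data covers x in [0, 2*pi] -- the "frontier already pushed".
x_train = rng.uniform(0, 2 * np.pi, 200)
y_train = np.sin(x_train) + rng.normal(0, 0.05, x_train.shape)

# Fit a degree-9 polynomial (stand-in for any curve-fitting learner).
model = Polynomial.fit(x_train, y_train, deg=9)

def rmse(x):
    return float(np.sqrt(np.mean((model(x) - np.sin(x)) ** 2)))

x_interp = np.linspace(0.5, 5.5, 100)              # inside training range
x_extrap = np.linspace(2 * np.pi, 3 * np.pi, 100)  # just past the frontier

print(f"interpolation RMSE: {rmse(x_interp):.3f}")  # small
print(f"extrapolation RMSE: {rmse(x_extrap):.3f}")  # orders of magnitude worse
```

The point of the toy: a model can be excellent at anything “sufficiently similar” to what it was trained on while being useless one step beyond it, which is the regime where humans keep doing the frontier-pushing and AI reuses the solutions.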