A related take I heard a while ago: LLMs have strongly superhuman declarative knowledge across countless subject areas. Any human with that much knowledge would be able to come up with many new theories or insights by combining ideas from different fields. But LLMs apparently can’t do this. They don’t seem to synthesize, integrate, and systematize their knowledge much.
Though maybe they have some latent ability to do this, and they only need some special sort of fine-tuning to unlock it, similar to how reasoning training seems to elicit abilities the base models already have.