I don’t see it that way. Broad and deep knowledge is as useful as ever, and LLMs are no substitutes for it.
This anecdote comes to mind:
Dr. Pauling taught first-year chemistry at Caltech for many years. All of his exams were closed book, and the students complained bitterly. Why should they have to memorize Boltzmann’s constant when they could easily look it up when they needed it? I paraphrase Dr. Pauling’s response: I was always amazed at the lack of insight this showed. It’s what you have in your memory bank—what you can recall instantly—that’s important. If you have to look it up, it’s worthless for creative thinking.
He proceeded to give an example. In the mid-1930s, he was riding a train from London to Oxford. To pass the time, he read an article in the journal Nature arguing that proteins were amorphous globs whose 3D structure could never be deduced. He instantly saw the fallacy in the argument, thanks to one isolated stray fact in his memory bank: the key chemical bond in the protein backbone did not rotate freely, as the article assumed. Linus knew from his college days that the peptide bond had to be rigid and coplanar.
He began doodling, and by the time he reached Oxford, he had discovered the alpha helix. A year later, his discovery was published in Nature. In 1954, Linus won the Nobel Prize in Chemistry for it. The discovery lies at the core of many of the great advances in medicine and pharmacology that have occurred since.
This fits with my experience. If you’re trying to do some nontrivial research or planning, you need a vast repository of high-quality mental models of diverse phenomena in your head, ready to be retrieved in a split second and immediately integrated into your thought process. If you have to go ask an LLM about something, it breaks the flow state, derails your train of thought, and simply takes dramatically more time. Not to mention unknown unknowns: how can you draw on an LLM’s knowledge about X if you don’t even know that X is a thing?
IMO, the usefulness of LLMs lies in improving your ability to build broad and deep internal knowledge bases, not in substituting for them.
This is probably right. Though perhaps one special case of my point remains correct: the value of a generalist as a member of a team may be somewhat reduced.
The value of a generalist with shallow knowledge is reduced, but you get a chance to become a generalist with relatively deep knowledge of many things. You already know the basics, so you can start a conversation with an LLM to learn more (and knowing the basics will help you notice when the LLM hallucinates).
That moment when you’ve invested in building a broad and deep knowledge base instead of your own agency and then LLMs are invented.
it hurts