This seems like a somewhat difficult use case for LLMs. It may be a mistake to think of them as a database of the *entire contents* of the training data. Perhaps instead think of them as compressed amalgamations of the general patterns in the training data? I'm not terribly surprised that random obscure quotes can get optimized away.
Yes, but it knows all of Bostrom's articles, perhaps because it has seen the list a hundred times.