Why did you use Chinese models? You could use models with a larger context window and put in more data about Borges, or even use Claude Code itself (instructing it never to look at the original text of the story). While the result is unlikely to be verbatim, a close coincidence would mean you have a good LLM model of Borges (a sideload).
If you fine-tuned an open LLM on the story, how did you prevent pure memorization, and how can you distinguish memorized output from original writing?
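For instance, one rough way to test for memorization would be to measure verbatim n-gram overlap between the generated text and the story itself. A minimal sketch (the n-gram size and any threshold you might apply are my own guesses, not established cutoffs):

```python
def verbatim_ngram_overlap(generated: str, source: str, n: int = 8) -> float:
    """Fraction of word n-grams in `generated` that appear verbatim in `source`.

    A high score suggests recall of the training text; a low score,
    combined with stylistic similarity judged separately, suggests imitation.
    """
    source_norm = " ".join(source.split())  # normalize whitespace for matching
    words = generated.split()
    if len(words) < n:
        return 0.0
    ngrams = [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]
    hits = sum(1 for g in ngrams if g in source_norm)
    return hits / len(ngrams)
```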
Claude isn’t open-source. Information just wants to be free!
Yes, but it has the strongest predictive power.
I ran similar experiments with Nabokov's story Spring in Fialta: I gave the model the first sentence and asked it to continue in Nabokov's style.
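A minimal sketch of such a continuation experiment with an open model via Hugging Face transformers (the model name is illustrative, not the one actually used):

```python
from transformers import pipeline

# Any open generative model works here; this name is only an example.
generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

opening = "Spring in Fialta is cloudy and dull."  # the story's first sentence
prompt = (
    "Continue this opening in the style of Vladimir Nabokov, "
    "without quoting any existing text:\n\n" + opening
)
result = generator(prompt, max_new_tokens=300, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```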