Arguably the same is true of modern LLMs. Even a base model is not a “generic person” but a “generic text”. The model ranke-4b is also fine-tuned (at least on question formats and to stay in character). So it’s a reconstructed version
The base model is an unpolished diamond: it is full of raw potential, but extracting its knowledge takes effort, since it does not respond to questions in a chat format; it simply continues whatever text it is given.
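To make this concrete, here is a minimal sketch of the difference in how the two kinds of model are prompted. The role tags shown are an assumption, loosely modeled on common instruct-model chat templates, not any specific model's actual format:

```python
question = "What is the capital of France?"

# A base model just continues text, so you coax an answer out of it
# by framing the question as a passage to be completed.
base_prompt = f"Q: {question}\nA:"

# A chat-tuned model expects role-tagged turns like the ones it was
# fine-tuned on (hypothetical tags for illustration).
chat_prompt = f"<|user|>\n{question}\n<|assistant|>\n"

print(base_prompt)
print(chat_prompt)
```

The base prompt relies on the model's tendency to continue a Q&A-shaped passage; the chat prompt relies on fine-tuning that taught the model to answer when it sees the assistant tag.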