I think there is a difference between finetuning and prompting: in the prompting case, the LLM is aware that it's taking part in a role-playing scenario, whereas finetuning on synthetic documents can make the LLM believe something more deeply. One could perhaps make the finetuning more sample-efficient by distilling a prompted model instead. Another option could be steering vectors, though I'm not sure those would work better than prompting.
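For what I mean by steering vectors: the common recipe is to take the difference of mean activations between two contrastive prompt sets and add that vector (scaled) into the model's hidden states at inference time. A minimal sketch with toy NumPy stand-ins for the activations (a real setup would extract these from an LLM's residual stream; all names and shapes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for hidden states of shape (n_prompts, hidden_dim).
# In a real setup these would be residual-stream activations from an LLM,
# collected on prompts with and without the target belief/persona.
hidden_dim = 8
acts_with_belief = rng.normal(loc=1.0, size=(16, hidden_dim))
acts_without_belief = rng.normal(loc=0.0, size=(16, hidden_dim))

# Steering vector: difference of mean activations between the two sets.
steering_vector = acts_with_belief.mean(axis=0) - acts_without_belief.mean(axis=0)

def steer(hidden_state: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the scaled steering vector to a hidden state at inference time."""
    return hidden_state + alpha * steering_vector

# Steered activations shift toward the "belief" direction.
steered = steer(acts_without_belief)
```

The coefficient `alpha` trades off steering strength against coherence of the model's outputs, which is part of why it's unclear this beats prompting.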