I have to say this was surprisingly harder than I expected. Doing some test runs with different names: a dataset and build using "Zeus" and "Magdalene" yielded better, more petertodd-ish/Leilan-ish outputs/results than the actual tokens " petertodd" and " Leilan" used in the dataset. Very interesting.
Is there something about these tokens even when still in GPT-2?