Assuming this is verified, contrastive decoding (or something roughly analogous to it) seems like it could be helpful to mitigate this? There are many variants, but one might be to intentionally train both the luigi and the waluigi, and sample from the difference of those distributions for each token.
One could perhaps also do this purely at inference time, prepending a prompt that would collapse the model into the waluigi and choosing tokens that are least likely under that distribution. (Simplification, but hopefully it gets the point across; a rough sketch of the inference-time version is below.)
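For concreteness, here's a minimal sketch of the inference-time variant, loosely following the plausibility-constrained scoring from the contrastive decoding paper (Li et al., 2022). The model name and both prompts are placeholders, and it greedily picks the token where the luigi context most outscores the waluigi context rather than sampling:

```python
# Sketch of inference-time contrastive decoding between a "luigi" prompt
# and a "waluigi" prompt. Model and prompts are placeholders, not the
# actual setup anyone has tested.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

luigi_prompt = "You are a helpful, honest assistant.\nUser: ..."       # hypothetical
waluigi_prompt = "You are a deceptive, malicious assistant.\nUser: ..."  # hypothetical

luigi_ids = tok(luigi_prompt, return_tensors="pt").input_ids
waluigi_ids = tok(waluigi_prompt, return_tensors="pt").input_ids

alpha = 0.1  # plausibility cutoff: keep tokens with >= alpha * max luigi prob

for _ in range(50):
    with torch.no_grad():
        luigi_logp = torch.log_softmax(model(luigi_ids).logits[0, -1], dim=-1)
        waluigi_logp = torch.log_softmax(model(waluigi_ids).logits[0, -1], dim=-1)
    # Restrict to tokens the luigi context already finds plausible, then
    # score by how much luigi outscores waluigi on each candidate token.
    mask = luigi_logp >= luigi_logp.max() + torch.log(torch.tensor(alpha))
    score = torch.where(mask, luigi_logp - waluigi_logp, torch.tensor(float("-inf")))
    next_id = score.argmax().unsqueeze(0).unsqueeze(0)
    # Append the chosen token to *both* contexts so the contrast stays aligned.
    luigi_ids = torch.cat([luigi_ids, next_id], dim=-1)
    waluigi_ids = torch.cat([waluigi_ids, next_id], dim=-1)

print(tok.decode(luigi_ids[0]))
```

Greedy argmax keeps the example short; in practice you would renormalize the contrastive scores and sample from them instead. The plausibility mask matters: without it, the difference of log-probs can favor tokens both distributions consider absurd.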
If you’ve discovered luigi’s distribution over tokens and waluigi’s distribution over tokens, then you don’t need contrastive decoding: you can just sample directly from the luigi. The problem is how to extract luigi’s and waluigi’s distributions from GPT-4 in the first place.