I think such consistency training on outputs would result in a model that's basically always eval-aware on your training distribution, at which point your consistency training has no gradient. You're then left with this super eval-aware model, you run it on your (held-out, OOD) evaluations, and you hope that there it's eval-aware in just the right way that you can still conclude the model would behave the same whether or not it's in an eval.
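To make the "no gradient" point concrete, here's a minimal sketch of how I'm imagining the setup: the consistency loss as a symmetric KL between the model's output distributions on paired eval-framed vs. deployment-framed prompts. (`model`, `prompt_pairs`, and the pairing scheme are my assumptions, not anything from your post.)

```python
import torch.nn.functional as F

def consistency_loss(model, prompt_pairs):
    """Penalize divergence between the model's output distributions on
    paired prompts: one framed as an eval, one framed as deployment.
    (Hypothetical helper: `model` maps a tokenized prompt to logits.)"""
    total = 0.0
    for eval_prompt, deploy_prompt in prompt_pairs:
        log_p_eval = F.log_softmax(model(eval_prompt), dim=-1)
        log_p_deploy = F.log_softmax(model(deploy_prompt), dim=-1)
        # Symmetric KL between the two output distributions.
        total = total + F.kl_div(log_p_eval, log_p_deploy,
                                 log_target=True, reduction="batchmean")
        total = total + F.kl_div(log_p_deploy, log_p_eval,
                                 log_target=True, reduction="batchmean")
    return total / len(prompt_pairs)

# The failure mode I'm pointing at: once the model reads *both* framings
# as evals, its two output distributions coincide, every KL term is ~0,
# and this loss provides (near-)zero gradient. Training has converged to
# "always eval-aware" rather than to genuine consistency.
```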
Is this the intent, or do you have a specific training method in mind that avoids this?