I’m not sure all/most unlearning work is useless, but it seems like it suffers from a “use case” problem.
When is it better to attempt unlearning rather than censor the bad info before training on it?
Seems to me like there is a very narrow window where you have already created a model, but then got new information about what sort of information would be bad for the model to know, and now need to fix the model before deploying it.
Why not just be more reasonable and cautious about filtering the training data in the first place?