Nice work! Since you cite our LEACE paper, I was wondering if you’ve tried burning LEACE into the weights of a model just like you burn an orthogonal projection into the weights here? It should work at least as well, if not better, since LEACE will perturb the activations less.
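To be concrete, here's roughly what I have in mind. A LEACE eraser is an affine map x ↦ x − A(x − b), so if the layer you're editing computes y = Wx + c, you can fold the eraser into the weights in closed form. This is only a sketch; the helper name and the y = Wx + c column-vector convention are my assumptions, not taken from your post:

```python
import torch

def bake_affine_eraser(W, c, A, b):
    """Fold an affine eraser  x -> x - A @ (x - b)  into a linear layer  y = W @ x + c.

    Erasing the layer's output gives (I - A) @ (W @ x + c) + A @ b,
    which is again a linear layer with the weight and bias returned here.
    """
    d = W.shape[0]
    I = torch.eye(d, dtype=W.dtype)
    W_new = (I - A) @ W          # eraser absorbed into the weight
    c_new = (I - A) @ c + A @ b  # eraser absorbed into the bias
    return W_new, c_new
```

With A = rrᵀ/‖r‖² and b = 0 this reduces to the orthogonal projection you're already baking in, so LEACE should drop in as a replacement.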
Nitpick: I wish you would use a word other than “orthogonalization”, since it sounds like you’re making the weight matrix an orthogonal matrix. Why not LoRACS (Low Rank Adaptation Concept Erasure)?
I do respectfully disagree here. I think the verb “orthogonalize” is just confusing. I also don’t think the distinction between optimization and no optimization is very important. What you’re actually doing is orthogonally projecting the weight matrices onto the orthogonal complement of the direction.
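In symbols (a minimal sketch, assuming column-vector activations y = Wx so the direction is removed from the layer's output; the helper name is mine):

```python
import torch

def project_out_direction(W: torch.Tensor, r: torch.Tensor) -> torch.Tensor:
    """Replace W with (I - r r^T / ||r||^2) W, i.e. orthogonally project W's
    columns onto the orthogonal complement of r, so the layer can no longer
    write any component along r."""
    r = r / r.norm()
    return W - torch.outer(r, r) @ W
```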