Thanks to the authors for the additional experiments and code, and to you for your replication and write-up!
IIUC, RR makes use of LoRA adapters whereas HP is only a logistic regression probe, meaning that RR is optimizing over a more expressive space. Does it seem likely to you that RR would beat an HP implementation that jointly optimizes LoRA adapters + a linear classification head (out of some layer), so that the model retains performance while the linear probe also functions as a good harmfulness classifier?
(It’s been a bit since I read the paper, so sorry if I’m missing something here.)
Why would it 2x the cost of inference? To be clear, my suggested baseline is "attach exactly the same LoRA adapters that were used for RR, plus one additional linear classification head, then train on an objective similar to RR's but with the rerouting loss replaced by a classification loss for that head." Explicitly, this is to test the hypothesis that RR only worked better than HP because it was optimizing more parameters (but isn't otherwise meaningfully different from probing). A sketch of what I mean is below.
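To make that concrete, here's a minimal sketch of the objective I have in mind, assuming a HuggingFace-style model (returns `.loss` when given labels, and `.hidden_states` when asked). All the names here (`joint_loss`, `probe`, `alpha`, the last-token pooling) are placeholders of mine, not anything from the paper or the replication:

```python
import torch
import torch.nn.functional as F

# `model` is the LoRA-adapted LM; `probe` is the one extra linear head,
# e.g. probe = torch.nn.Linear(model.config.hidden_size, 1).
def joint_loss(model, probe, retain_batch, harmful_batch, layer, alpha=1.0):
    # Retain term: keep the adapted model's behavior on benign data
    # (plain LM loss here, assuming `retain_batch` includes `labels`;
    # RR's actual retain loss may differ).
    retain_out = model(**retain_batch, output_hidden_states=True)
    retain_loss = retain_out.loss

    # Classification term: this is what replaces RR's rerouting loss.
    # The head reads activations at `layer` (last token, for simplicity)
    # and is trained to separate benign (label 0) from harmful (label 1).
    harm_out = model(**harmful_batch, output_hidden_states=True)
    h_benign = retain_out.hidden_states[layer][:, -1, :]
    h_harm = harm_out.hidden_states[layer][:, -1, :]
    logits = torch.cat([probe(h_benign), probe(h_harm)]).squeeze(-1)
    labels = torch.cat([
        torch.zeros(h_benign.size(0), device=logits.device),
        torch.ones(h_harm.size(0), device=logits.device),
    ])
    clf_loss = F.binary_cross_entropy_with_logits(logits, labels)

    return retain_loss + alpha * clf_loss
```

The point is just that gradients flow into both the LoRA adapters and the head, so this baseline optimizes the same parameter space as RR plus one linear layer.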
(Note that LoRA adapters can be merged into model weights for inference.)
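(For instance, with the PEFT library, something like the following should fold the adapters back into the base weights; the model name and paths are placeholders:)

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the base model, attach the trained adapters, and merge them into
# the base weights, so inference costs the same as the unadapted model
# (the extra linear head is negligible).
base_model = AutoModelForCausalLM.from_pretrained("base-model-name")
merged = PeftModel.from_pretrained(base_model, "path/to/lora-adapters").merge_and_unload()
merged.save_pretrained("path/to/merged-model")
```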
(I agree that you could also just use more expressive probes, but I’m interested in this as a baseline for RR, not as a way to improve robustness per se.)